AI Literacy: A Business Case for Responsible AI Deployment

It was a pleasure to be invited to DTU Skylab, nestled in the Technical University of Denmark campus, to present some of my research. This was a great opportunity to highlight A Business Case for Responsible AI Development to DTU staff and students, visiting researchers, and Pioneer Centre for AI cohorts.

YouTube: DTU Skylab Channel

Definition of AI literacy


I introduced a more expansive definition of AI literacy, taken from Stanford University's AI Literacy program.

Check out this free resource to help you understand AI literacy from a teaching and learning perspective at Stanford: 

LINK: https://teachingcommons.stanford.edu/teaching-guides/artificial-intelligence-teaching-guide/understanding-ai-literacy

Building a learning organization not only promotes resilience to change and disruption; it also helps uplift, empower, and inspire your workforce to work smarter, innovate, and reimagine better ways your business can operate.

I think that AI literacy, when we expand the definition and tackle it from a learning perspective, gives businesses a chance to reimagine this tool as a tool of empowerment rather than disempowerment.
— Olivia Heslinga on the definition of AI literacy

I focused on only one of the pillars of AI literacy, the ethical pillar: how do we navigate the ethical issues around AI? Here I highlighted the importance of involving your workforce in AI development and investing in their AI literacy.

Looking at AI through an ethical lens will not only help re-align your values; it also raises awareness that a powerful tool which can optimize and build transparency in your organization can equally disempower your workforce through exposure effects, biases, and a decline in critical thinking, if its development is not intentional and governed by the users as well.

I introduced a think tank that I'm part of, Open Ethics, and their AI Ethics Maturity Model, which helps businesses scale and deploy AI in an ethical way.

The framework offers a systematic approach across a set of levels, from awareness to governance. Each level focuses on the organization becoming more open and transparent, improving its ethical posture, and covering elements such as team, accountability, risk assessment, robustness, privacy, and policy compliance.
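To make this concrete, here is a purely illustrative Python sketch of a self-assessment along such a maturity ladder. The two middle level names and the "weakest element caps the level" scoring rule are my own placeholder assumptions, not the official OEMM definitions; only the endpoints and the element list come from the description above.

    # Hypothetical self-assessment on an awareness-to-governance ladder.
    # The middle rungs and the scoring rule are illustrative placeholders.
    LEVELS = ["awareness", "assessment", "implementation", "governance"]
    ELEMENTS = ["team", "accountability", "risk assessment",
                "robustness", "privacy", "policy compliance"]

    def overall_level(scores: dict[str, int]) -> str:
        """Map per-element scores (0..3) to an overall maturity level.
        Assumption: the weakest element caps the whole organization."""
        return LEVELS[min(scores.get(e, 0) for e in ELEMENTS)]

    scores = {e: 2 for e in ELEMENTS}  # mostly at the third rung
    scores["privacy"] = 0              # one lagging element
    print(overall_level(scores))       # -> "awareness"

Capping overall maturity by the weakest element is one reasonable reading of why each level has to cover every element before an organization can move up.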

Feel free to book Open Ethics and me for technical and strategy consultation on how to implement this:

https://openethics.ai/oemm

Environmental Sustainability of AI

AI has a dirty secret, and it is most likely under-reported: dirty coal is becoming the short-lived answer to AI computation, further exacerbating AI's environmental footprint and becoming costly for society.

Other news around AI sustainability:

AI growth means more coal and gas (Bloomberg)

  • https://www.bloomberg.com/news/articles/2025-04-10/ai-data-center-growth-means-more-coal-and-gas-plants-iea-says

I mentioned the EU Horizon grant project SustainML.eu as an answer to this problem. SustainML is dedicated to creating a sustainable ML framework for Green AI. By prioritizing energy efficiency, SustainML aims to pave the way for environmentally conscious AI solutions that are both efficient and effective.

Through this program, carbontracker.info was created to help academia and businesses track the carbon footprint of their models, find ways to optimize, move to greener compute, and build a responsible technology roadmap for scaling AI in a sustainable way.
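As a minimal sketch of what that tracking looks like in practice, assuming the open-source carbontracker Python package documented on the site (with a sleep call standing in for a real training epoch):

    # Minimal sketch: measure the carbon footprint of a training run
    # with carbontracker (pip install carbontracker).
    import time
    from carbontracker.tracker import CarbonTracker

    max_epochs = 3
    tracker = CarbonTracker(epochs=max_epochs)  # predicts the full run after epoch one

    for epoch in range(max_epochs):
        tracker.epoch_start()
        time.sleep(5)        # stand-in for one real training epoch
        tracker.epoch_end()

    tracker.stop()           # reports measured energy (kWh) and estimated CO2eq

Once those numbers sit in the training logs, greener choices such as smaller models, fewer runs, or scheduling jobs in lower-carbon regions and hours stop being abstract and can go straight onto the roadmap.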

Other resources on environmental questions and AI:

The AI & Environment Resource Hub is a curated collection of knowledge, tools, and insights at the intersection of artificial intelligence and sustainability. Whether you're a researcher, policymaker, developer, or just curious, this hub connects you to key resources exploring how AI can drive environmental solutions.

AI Disempowerment and Empowerment:

“If businesses only look at AI from just a technical and functional perspective, I don't think they realize how dangerous the exposure effect can be to their users”

- Olivia Heslinga, DTU AI Lunch Talks

This quote was alluding to multiple studies, from the University of New Hampshire, MIT, and others, showing that bias is inherent and dangerous to users if not mitigated.

Artificial intelligence and bias: Four key challenges (University of New Hampshire)

  • https://news.mit.edu/2022/machine-learning-bias-0601

Cognitive Biases in Online Opinion Platforms: A Review and Mapping

A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?

  • https://pubsonline.informs.org/doi/full/10.1287/msom.2023.0279

Subtle biases in AI can influence emergency decisions

  • https://news.mit.edu/2022/when-subtle-biases-ai-influence-emergency-decisions-1216

I didn't get a chance to mention a report from Microsoft Research showcasing the mechanized convergence of thought and the decline in critical thinking self-reported by knowledge workers who used AI tools over a short period of time.

I also highlighted a recent report from Elon University, drawing on 300+ AI experts who study behavioral and market trends in AI usage, showcasing the danger of a decline in core human cognitive and social traits.

Report:

https://imaginingthedigitalfuture.org/reports-and-publications/being-human-in-2035/being-human-in-2035-research-methodologies-and-topline-findings/

I hope we begin to build a more inclusive, social, and connected society rather than trading community for convenience. The statistics tend to validate these findings, showing that the highest usage of GenAI is actually companionship.

Throughout my talk I mentioned the vulnerabilities in how we have built our digital infrastructure, and how the black box of LLMs leaves us exposed not just to harmful bias, but to data pollution, loss of agency and control over our data, and cybersecurity risks.

Check out my blog on cybersecurity and AI.

Now more than ever, the deterioration and enshittification of our digital realities & platforms will have fundamental consequences for our society. Google DeepMind published a taxonomy of GenAI misuse showing that impersonation will be the biggest challenge, as verification practices quickly become outdated due to bad infrastructure and AI-equipped bad actors scaling their impact.

US Big Tech backtracks on EU fact-checking commitments

Mapping the misuse of generative AI

Scoop: Google won't add fact checks despite new EU law

Meta says it will end fact-checking as Silicon Valley prepares for Trump

Reddit will tighten verification to keep out human-like AI bots

Fact-checkers under fire as Big Tech pulls back


But I digress; this is a larger societal question about how AI can be a mirror to what we are losing as a humanity, and how we can realign our needs, wants, and self-development in order to continue building a society where everyone thrives.

I ended the short presentation with plenty of time for questions, because Q&A not only gives me clarity on my presentation, it also allows me to engage with the audience.

I recommend a powerful and insightful book, Power and Progress, by Nobel laureates in economics Daron Acemoglu and Simon Johnson.

Buy the book: https://shapingwork.mit.edu/power-and-progress/

Check out their video:

https://news.mit.edu/2024/mit-economists-daron-acemoglu-simon-johnson-nobel-prize-economics-1014

In this technological shift in innovation, we have a chance to rewrite our social contract with business and technology, with society and nature, and with ourselves. Connect with me on LinkedIn for honest truths behind AI, promotion of conferences and talks bringing awareness to ethical AI deployment, and academic papers and novel methodologies to tackle society's complex problems.

Let's build an AI future together.
