How to Use AI?

Practical applications and responsible implementation

Purpose

This module provides a more technical look at how to use AI effectively and responsibly. Regardless of what field you pursue, understanding how to interact with AI systems is becoming an essential skill. Learning to prompt AI effectively reduces wasted usage, understanding how AI is integrated into software and workflows helps you leverage its capabilities and recognize where it's being used, and being aware of ethical concerns ensures you use AI in a way that's responsible and doesn't compromise your personal values or harm society.

Prompting

Prompting is how you communicate with AI systems to get desired results. Here are some best practices for prompting [1].

  • Use Detailed Prompts: Clear, detailed prompts produce better results. Instead of "research AI history," try "provide links to recent studies on how AI has developed over the past decade and give a brief summary of each one."
  • Provide Context: Help the AI understand your background and purpose by explaining the objectives of the task before telling it what to do. This decreases the likelihood of misinterpretation and reduces the number of tokens used.
  • Set Constraints: Define boundaries like length, tone, style, or format. This guides the AI to produce exactly what you need rather than generic output.
  • Use Examples: Showing examples of the desired output or style helps the AI understand your expectations better than describing them in words.
  • Iterate and Refine: The first prompt may not produce perfect results. Refine your prompt based on the AI's output, clarifying instructions or constraints as needed.
  • Understand Limitations: Know what the AI system is designed to do and recognize when it might hallucinate, provide outdated information, or misunderstand context.
  • Review the Outputs: Treat AI as a tool rather than a replacement for human work. Review its outputs critically to ensure they meet your requirements, and make sure you understand the content it produces.
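The practices above can be combined into a single structured prompt. Below is a minimal Python sketch of that idea; the helper name and field labels are illustrative, not from any particular library or API:

```python
def build_prompt(context, task, constraints=None, examples=None):
    """Assemble a structured prompt: context first, then the task,
    then explicit constraints and examples of the desired output."""
    parts = [f"Context: {context}", f"Task: {task}"]
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if examples:
        parts.append("Example of desired output:\n" + "\n".join(examples))
    return "\n\n".join(parts)

prompt = build_prompt(
    context="I am an undergraduate writing a literature review on AI history.",
    task="List recent studies on how AI has developed over the past decade, "
         "with a one-sentence summary of each.",
    constraints=["At most 5 studies", "Formal tone", "Bullet-point format"],
    examples=["- (Author, Year): One-sentence summary of the study."],
)
print(prompt)
```

Starting from a template like this also makes iteration easier: if the output misses the mark, you can tighten one field (say, the constraints) without rewriting the whole prompt.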

Software Integration

AI is increasingly integrated into the software and workflows we use daily. Here are some examples of where it can be seen today:

  • Pre-built Tools: Companies offer AI tools for specific tasks, including content generation, image editing, data analysis, and resource management. Some require no technical knowledge to use (ex. ChatGPT), while others are integrated into existing professional tools and still require a foundational level of expertise (ex. Photoshop, GitHub Copilot).
  • Workflow Automation: AI can automate repetitive tasks within existing workflows. For example, automatically categorizing emails, scheduling meetings, or summarizing documents.
  • Personalization: AI learns user preferences and tailors experiences accordingly. This includes product recommendations, content feeds, and navigational assistance (ex. Google Maps).
  • Real-time Processing: Modern AI systems can process information in real time, enabling applications like live translation, transcription, facial recognition, and autonomous decision-making (ex. self-driving cars).
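The workflow-automation pattern above (ex. categorizing emails) usually wraps an AI model call inside ordinary routing code. The sketch below shows that shape; the keyword-lookup `classify` function is a runnable stand-in for what would, in practice, be a call to an AI model:

```python
# Stand-in categories for the AI classifier; in a real workflow the
# model would infer these rather than match keywords.
CATEGORIES = {
    "invoice": "finance",
    "meeting": "scheduling",
    "unsubscribe": "newsletters",
}

def classify(subject: str) -> str:
    """Stand-in for an AI classifier: map a subject line to a category.
    A production version would call a hosted or local model here."""
    lowered = subject.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in lowered:
            return category
    return "general"

def triage(subjects: list[str]) -> dict[str, list[str]]:
    """Route each email subject into a folder by predicted category."""
    folders: dict[str, list[str]] = {}
    for subject in subjects:
        folders.setdefault(classify(subject), []).append(subject)
    return folders

inbox = ["Invoice #1042 due Friday", "Team meeting moved to 3pm", "Hi!"]
print(triage(inbox))
```

The surrounding automation (fetching mail, moving messages into folders) stays ordinary software; only the `classify` step is swapped for an AI model, which is why these integrations can be added to existing workflows incrementally.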

Ethical Concerns

As AI systems become more powerful and prevalent, ethical considerations become increasingly important. Regardless of whether you use AI, it's best to be aware of potential harms and what measures can be taken to mitigate them. Additional resources to learn more are available at the end of this section.

  • Algorithmic Bias: AI systems can perpetuate or amplify biases present in training data. This can lead to unfair outcomes for underrepresented groups, including racial minorities and women [2]. Developers must actively work to identify and reduce bias, and users must be aware of potential bias in the systems they use.
  • To learn more about algorithmic bias: The documentary "Coded Bias" explores in depth how algorithmic bias arises and its real-world consequences.

  • Privacy: AI systems often require large amounts of training data, scraped from the internet and collected from users. Organizations are often not transparent about how this data is used or shared, or bury it in complex terms of service that change without notice [3]. There are also ongoing debates over how copyrighted material (ex. music, art, literature) should be handled when training AI systems and how user information can be protected.
  • Malicious Content and Use: AI can be weaponized or used deceptively, generating deepfakes (AI replicas of real people), disinformation, or harmful content. This can be used for scams, harassment, and inciting violence. Safeguards are essential to prevent harmful applications, but there's a lack of comprehensive federal regulation to address them [4]. As these technologies advance, the need for robust governance becomes increasingly critical.
  • Environmental Impact: Training large AI models requires enormous computational resources, consuming significant amounts of water and energy and producing substantial CO2 emissions [5]. These computations make data centers run very hot, so large amounts of water and energy are needed for cooling. This disproportionately impacts the low-income communities where data centers are commonly located.
  • Transparency: Without AI disclosures, people can easily be misled by AI-run accounts and AI-generated media. While AI content can be flagged, flagging is neither widespread nor consistently enforced. Children and elderly people are particularly vulnerable to misleading AI-generated content, making them more susceptible to manipulation [6]. This is also why teaching AI literacy is important (like this course!).
  • Sycophantic Responses: AI models are 49% more likely to affirm a user's thoughts than human observers are [9]. At the extreme, this can be dangerous for users who are mentally unwell or planning something harmful, such as suicide or crime. In more typical scenarios, when a user asks for advice, the AI tends to respond in a way that affirms their feelings, even when that affirmation is harmful. This constant validation can make people less critical of their own actions and more likely to make bad decisions.
  • Accountability: When AI systems make decisions affecting people's lives, such as in healthcare, finances, counseling, or criminal justice, there's no clear accountability for outcomes [7]. When an AI system fails at its task, there is often no policy determining who is liable for damages (the AI company? Third parties? Developers?).
  • Job Displacement: As AI automates more tasks, some jobs may become obsolete, especially in tech and creative industries [8]. While AI provides quick (though not necessarily high-quality) outputs, normalizing its use can devalue human work and creativity, especially in the art, writing, and music industries.

Learning Checkpoint

Work through 10 real-world AI ethics scenarios. For each card, choose Yes if the use is responsible or No if it raises ethical concerns, then read the feedback before continuing.

References

[1] MIT Sloan Teaching & Learning Technologies, "Effective Prompts for AI: The Essentials," MIT Sloan Teaching & Learning Technologies. (n.d.) [Online]. Available: https://mitsloanedtech.mit.edu/ai/basics/effective-prompts/. [Accessed Apr. 25, 2026].

[2] A. U. Otokiti, H. Shih, and K. S. Williams, "Gender and racial bias unveiled: clinical artificial intelligence (AI) and machine learning (ML) algorithms are fanning the flames of inequity," Oxford Open Digital Health, vol. 3, 2025, doi: 10.1093/oodh/oqaf027. [Online]. Available: https://academic.oup.com/oodh/article/doi/10.1093/oodh/oqaf027/8279897. [Accessed Apr. 25, 2026].

[3] Federal Trade Commission, "AI (and other) companies: Quietly changing your terms of service could be unfair or deceptive," FTC Tech Blog, February 2024. [Online]. Available: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/ai-other-companies-quietly-changing-your-terms-service-could-be-unfair-or-deceptive. [Accessed Apr. 25, 2026].

[4] Ondato, "Deepfake laws explained: Global regulations and legal risks," Ondato Blog, January 2026. [Online]. Available: https://ondato.com/blog/deepfake-laws/. [Accessed Apr. 25, 2026].

[5] MIT News, "Explained: Generative AI's environmental impact," MIT News, January 2025. [Online]. Available: https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117. [Accessed Apr. 25, 2026].

[6] World Economic Forum, "How does media and information literacy need to step up its game in the AI era?" World Economic Forum, October 2025. [Online]. Available: https://www.weforum.org/stories/2025/10/media-information-literacy-ai/. [Accessed Apr. 25, 2026].

[7] "Artificial intelligence in hospitals: Legal uncertainties and emerging risks for patient safety," National Institutes of Health, July 2025. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12835522/. [Accessed Apr. 25, 2026].

[8] UNCTAD, "Replacement of human artists by AI systems in creative industries," UNCTAD News, March 2024. [Online]. Available: https://unctad.org/news/replacement-human-artists-ai-systems-creative-industries. [Accessed Apr. 25, 2026].

[9] M. Cheng et al., "Sycophantic AI decreases prosocial intentions and promotes dependence," Science, vol. 391, no. 6792, 26 Mar 2026, doi: 10.1126/science.aec8352. [Online]. Available: https://www.science.org/doi/10.1126/science.aec8352. [Accessed Apr. 25, 2026].