Be the first to know and get exclusive access to offers by signing up for our mailing list(s).

Subscribe

We ❤️ Open Source

A community education resource

7 practical steps to avoiding pitfalls in AI 

How to manage risks when adopting AI language models in your organization.

AI is everywhere, shaping the systems and tools we use daily, but with that comes new risks we don’t always see coming. In her lightning talk at All Things Open, Sophia Rowland from SAS shares insights on the potential pitfalls of language model adoption and how organizations can build AI systems more responsibly.

“The opinions expressed are Sophia Rowland’s own and do not represent the views of SAS Institute Inc.”

Subscribe to our All Things Open YouTube channel to get notifications when new videos are available.

Sophia begins with real-world examples showing the unexpected consequences of AI missteps. From an event poster promising “catgagating” that never appeared to a chatbot that accidentally cost an airline $812, these stories highlight that AI errors carry financial, reputational, and operational costs. She reminds developers that even well-intentioned AI applications, like automated email generation, can backfire without careful review.

She then outlines seven practical steps to reduce risks when building with language models. First, clearly define your system’s purpose, limitations, and potential consequences. Sanitizing user inputs is essential to protect sensitive data and prevent malicious manipulation, like prompt injection. Choosing the right model for your use case involves considerations such as deployment location, language support, model size, and generalizability.

Read more: Build your own private AI assistant with Bookshelf

Sophia emphasizes adding context through techniques like retrieval augmented generation (RAG) and evaluating outputs for relevance, accuracy, and toxicity. Red teaming, where developers intentionally try to break their system, helps identify vulnerabilities before they reach users. Finally, she notes the importance of preparing for inevitable errors, providing ways for users to report issues and ensuring systems are resilient when AI responses go wrong.

Key takeaways

  • AI is powerful, but it isn’t perfect. Language models can make mistakes that affect finances, reputations, and operations, and understanding those risks is the first step to responsible adoption.
  • Building responsibly starts with preparation. Defining system purpose, sanitizing inputs, and selecting the right model help developers reduce errors before they happen.
  • Catching problems early matters. Evaluating outputs, using red teaming, and providing ways to report errors give teams the ability to correct issues and keep AI systems reliable.

Conclusion

Building AI responsibly requires both awareness and preparation. Sophia’s talk shows that by understanding the risks and taking proactive steps, developers can harness the power of language models while minimizing potential harm.

More from We Love Open Source

About the Author

The ATO Team is a small but skilled team of talented professionals, bringing you the best open source content possible.

Read the ATO Team's Full Bio

The opinions expressed on this website are those of each author, not of the author's employer or All Things Open/We Love Open Source.

Want to contribute your open source content?

Contribute to We ❤️ Open Source

Help educate our community by contributing a blog post, tutorial, or how-to.

We're hosting two world-class events in 2026!

Join us for All Things AI, March 23-24 and for All Things Open, October 18-20.

Open Source Meetups

We host some of the most active open source meetups in the U.S. Get more info and RSVP to an upcoming event.