We ❤️ Open Source

A community education resource

How to manage AI responsibly with MLOps, model cards, and nutrition labels

Keep your AI models accurate, fair, and reliable with simple tools and practices.

Sophia Rowland, senior product manager at SAS, sat down with the All Things Open team to share how responsible AI practices, like “nutrition labels” for machine learning models, help developers build trustworthy systems and keep them healthy over time.

The opinions expressed are Sophia Rowland’s own and do not represent the views of SAS Institute Inc.


Read more: 7 practical steps to avoiding pitfalls in AI 

Sophia describes how her work in machine learning operations (MLOps) grew from a common challenge: Teams build models but struggle to deploy and maintain them. She explains how MLOps ensures models stay accurate as data changes, avoiding the pitfalls of model decay.
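
One common way to quantify the model decay she describes is to compare a feature's training-time distribution against its production distribution. The sketch below uses the Population Stability Index (PSI), a widely used drift metric; the feature name, sample data, and thresholds are illustrative assumptions, not something from the interview.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index: a common measure of data drift.

    Compares a feature's distribution at training time (`expected`)
    with its distribution in production (`actual`). Common rules of
    thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate drift, and
    > 0.25 is significant drift worth investigating.
    Assumes a continuous feature with enough data to bin by quantiles.
    """
    # Bin edges come from the training-time distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clamp production values into the training range so every
    # observation lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    exp_frac = np.histogram(expected, edges)[0] / len(expected)
    act_frac = np.histogram(actual, edges)[0] / len(actual)
    # Guard against log(0) on empty bins.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

# Hypothetical example: applicant income shifts upward after deployment.
rng = np.random.default_rng(0)
train_income = rng.normal(50_000, 10_000, 5_000)  # feature at training time
prod_income = rng.normal(58_000, 10_000, 5_000)   # same feature in production
print(round(population_stability_index(train_income, prod_income), 3))
```

A monitoring job can run a check like this on a schedule and alert when the score crosses a retraining threshold, which is the kind of automation MLOps pipelines exist to provide.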

Responsible AI is central to her approach. Sophia compares AI to driving a car: full of benefits, but not without risks. Just as traffic rules and driver’s ed reduce accidents, developers need governance and awareness to prevent harm. She highlights model cards as “nutrition labels,” offering clear, accessible details about a model’s training data, accuracy, and drift so teams can decide when to retrain or retire a model.
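
A model card can be as simple as a structured record checked in alongside the model. The sketch below is one minimal way to capture the "nutrition label" fields Sophia mentions; the field names and the example model are illustrative assumptions, not a published standard.

```python
from dataclasses import asdict, dataclass, field
import json

@dataclass
class ModelCard:
    """A 'nutrition label' for a model: the facts a reviewer needs
    to decide whether to trust, retrain, or retire it."""
    name: str
    version: str
    intended_use: str
    training_data: str                # data source and time range
    evaluation_metrics: dict          # e.g. accuracy on a held-out set
    known_limitations: list = field(default_factory=list)
    fairness_checks: dict = field(default_factory=dict)

# Hypothetical model used purely for illustration.
card = ModelCard(
    name="loan-default-classifier",
    version="2.1.0",
    intended_use="Rank retail loan applications for manual review",
    training_data="Internal applications, 2022-2024",
    evaluation_metrics={"accuracy": 0.87, "auc": 0.91},
    known_limitations=["Not validated for small-business loans"],
    fairness_checks={"disparate_impact_ratio": 0.86},
)
print(json.dumps(asdict(card), indent=2))
```

Serializing the card to JSON makes it easy to publish next to the model artifact, so anyone deploying or auditing the model can read its "label" without digging through training code.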

Bias and errors are another focus. Sophia stresses that models will get things wrong, sometimes with serious financial or human costs. Monitoring for bias and disparate impact is key to protecting users and a company’s reputation. She also points to community collaboration, sharing how the Trustworthy AI Workflow project on GitHub evolved from her All Things Open talk to help developers document and improve fairness in their models.
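
The disparate-impact monitoring she mentions is often checked with a simple ratio of favorable-outcome rates between groups. The sketch below applies the common "four-fifths rule" heuristic; the group labels and decision data are illustrative assumptions.

```python
def disparate_impact_ratio(outcomes, groups, favorable=1,
                           protected="groupA", reference="groupB"):
    """Ratio of favorable-outcome rates between a protected group and a
    reference group. Under the widely used "four-fifths rule", a ratio
    below 0.8 is a red flag for disparate impact."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(o == favorable for o in selected) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical approval decisions (1 = approved) for two groups.
outcomes = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
groups = ["groupA"] * 5 + ["groupB"] * 5
print(round(disparate_impact_ratio(outcomes, groups), 2))
```

Feeding each batch of production decisions through a check like this, and recording the result on the model card, turns "monitor for bias" from a principle into a routine, auditable step.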

Key takeaways

  • Treat responsible AI like driver safety: governance and awareness are essential for reducing risk.
  • Use model cards as “nutrition labels” to monitor accuracy, data drift, and fairness.
  • Collaboration and open source tools can turn best practices into actionable workflows for any team.

Conclusion

Sophia’s insights show how responsible AI starts with awareness and continues through careful monitoring and community-driven tools. By adopting practices like model cards and open source frameworks, developers can keep models accurate, fair, and beneficial to everyone.

About the Author

The ATO Team is a small but skilled team of talented professionals, bringing you the best open source content possible.

The opinions expressed on this website are those of each author, not of the author's employer or All Things Open/We Love Open Source.
