We ❤️ Open Source
A community education resource
Model parameters grew 3x in 10 years: Here’s what that means
How GPUs excel at parallel processing for deep learning, and why scaling smartly beats scaling bigger.
Model parameters have grown threefold in 10 years, creating computational challenges and ethical concerns around transparency, bias, and how models make decisions. In this episode, Shashank Kapadia, Machine Learning Engineer at Walmart, joins the We Love Open Source podcast to share why GPUs excel at parallel processing for deep learning, how to scale AI models smartly instead of just bigger, and why carbon footprint matters when designing efficient architectures.
Shashank eased into machine learning from a math background after Harvard Business Review's 2012 article called data science the sexiest job of the 21st century. His love for numbers and patterns aligned well with building large-scale AI solutions across recommendation systems, search engines, and retail applications at Walmart.
What is a GPU?
GPUs are processing engines like CPUs, but designed for parallel operations. Originally built to handle near-real-time rendering for the gaming industry, GPUs excelled there and then revolutionized deep learning over the past decade: the calculations deep learning requires are dramatically accelerated when GPUs serve as the fundamental processing unit.
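To see why those calculations map so well onto a GPU, consider that a neural-network layer is essentially a matrix multiply, where every output element is an independent dot product a GPU can compute simultaneously across thousands of cores. Here is a minimal NumPy sketch contrasting the serial view with the single fused operation a GPU parallelizes; the sizes are hypothetical, chosen only for illustration:

```python
import numpy as np

# Illustrative layer sizes (hypothetical, kept small for clarity).
batch, d_in, d_out = 8, 64, 32
rng = np.random.default_rng(0)
x = rng.random((batch, d_in), dtype=np.float32)   # input activations
w = rng.random((d_in, d_out), dtype=np.float32)   # layer weights

# Serial view: each (i, j) output element is an independent dot product.
out_serial = np.empty((batch, d_out), dtype=np.float32)
for i in range(batch):
    for j in range(d_out):
        out_serial[i, j] = x[i] @ w[:, j]

# Parallel view: one fused matrix multiply. On a GPU, the independent
# dot products above are dispatched across many cores at once.
out_parallel = x @ w

# Both views compute the same result.
assert np.allclose(out_serial, out_parallel, atol=1e-3)
```

Because none of the batch × d_out dot products depends on any other, the work divides cleanly across parallel hardware, which is exactly the pattern GPUs were built for.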
Large-scale AI models face two major challenges. First, computational scale: model parameters grew threefold in 10 years alongside a tremendous influx of data, and training models and running inference requires massive computational resources. Second, ethical concerns: as models become more complex, transparency around bias and understanding how models reach decisions become critical.
Read more: 5 forces driving DevOps and AI in 2026
Bigger isn’t always better when scaling AI. The naive approach assumes more data equals better models, which in turn requires more computational resources. But you can be smarter. Do you really need all the data you’re using? Will it actually improve model performance? From an architectural standpoint, are you designing models as efficiently as possible? Scale smartly so the carbon footprint from computational resources is minimized.
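One practical way to ask "do I need all this data?" is to fit on increasing fractions of the training set and watch where validation error flattens. The sketch below does this with synthetic data and a plain least-squares fit; the data, sizes, and `fit_and_score` helper are all hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a simple linear relationship with noise (illustrative only).
n, d = 5000, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.1 * rng.normal(size=n)

X_train, y_train = X[:4000], y[:4000]
X_val, y_val = X[4000:], y[4000:]

def fit_and_score(frac):
    """Fit least squares on a fraction of the training data; return val MSE."""
    k = int(len(X_train) * frac)
    w, *_ = np.linalg.lstsq(X_train[:k], y_train[:k], rcond=None)
    return float(np.mean((X_val @ w - y_val) ** 2))

# If the error curve flattens early, the extra data (and the compute and
# carbon spent processing it) is buying very little.
for frac in (0.1, 0.25, 0.5, 1.0):
    print(f"{frac:>4.0%} of data -> val MSE {fit_and_score(frac):.4f}")
```

When the curve plateaus at a small fraction of the data, that is a concrete signal you can train a comparable model with far less compute.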
On AI replacing developers, Shashank sees AI as an assistant that improves productivity, not a replacement. Writing unit tests or building boilerplate applications clearly benefits from AI, but developers themselves aren’t replaceable at this point.
His advice: use the many open source tools available, but find value in your work instead of doing fancy things for the sake of being fancy, and always view your work through a responsible lens by asking whether it is happening responsibly.
Key takeaways
- GPUs revolutionized deep learning through parallel processing: Originally built for gaming renderings, GPUs excel at parallel operations that accelerate the calculations deep learning requires.
- Scale AI smartly, not just bigger: Question whether you need all your data and if it improves performance. Design efficient architectures that minimize carbon footprint.
- Ethical concerns grow with model complexity: Transparency around bias and understanding how models make decisions becomes critical as models become more complex.
Shashank’s message: Scale responsibly by questioning data needs, designing efficient architectures, and ensuring work happens from a responsible perspective.
More from We Love Open Source
- 15 open source backup solutions to protect your data
- The AI slop problem threatening open source maintainers
- Why 1.3 billion people depend on progress, not perfection
- Stop guessing, start measuring developer engagement
- 5 forces driving DevOps and AI in 2026
The opinions expressed on this website are those of each author, not of the author's employer or All Things Open/We Love Open Source.