A Brutal Disruption in Image Annotation Services in 2025

Rodrigo Cardenete, Founder at BUNCH
Last Update: October 10, 2024

We’ve been labeling training data for AI since 2018, and the technology has evolved faster than anyone expected. At the time, each project felt like a natural progression in dataset types and annotation techniques; in hindsight, that gradual change looks monumental. The AI industry today is far more mature than it was just a few years ago.

With all the progress we’ve seen in models—media hype, capital, and sky-high expectations—demand for training data is growing exponentially. And yes, the need is real.

The age of edge cases

For most of our clients, from indie data scientists to large R&D departments, outsourcing image annotation is no longer a choice—it’s survival. The days when engineers labeled images themselves are long gone: it wasted their time, and the volumes they could produce were never enough for training.

This shift opened the door to outsourcing image annotation to companies like ours, enabling large-scale labeling with dedicated experts focused on the most crucial element: data quality.

Edge cases are becoming harder to find as images grow more complex: lower in quality, more ambiguous, and more extreme.

So, what happened?

In the past few months, the game has changed so fast that I’d call it disruptive. Large teams of labelers aren’t enough anymore. Computer vision models have reached a point where throwing people at the problem doesn’t work. Edge cases are so complex that only subject-matter experts can make the contextual calls that add real value—and, more importantly, maintain consistency across the dataset.

This is what's happening:

1. The training data available in the world is finite, leading to synthetic data

Data collection, once a satellite service to data labeling, has grown into an industry of its own, influencing how large-scale labeling is approached.

A surprising phenomenon has surfaced: data is finite.

In certain industries, data is scarce because its collection is complex, slow, or expensive. Privacy concerns, especially in fields like medical imaging, further restrict its availability. Some data, once widely used for training, cannot be used anymore because it has already been exhausted or no longer meets regulatory standards.

To counter this, companies are generating synthetic data, which is then labeled and fed back into the model. CGI or generative AI can simulate practically any real-world environment, replacing or supplementing real data by adding diversity or covering rare edge cases that are difficult to find in the wild. Generated data allows for faster iterations and is in some cases far cheaper than collecting natural data.
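As a minimal sketch of what such a pipeline can look like, the snippet below composites cut-out objects onto background scenes and emits bounding-box labels alongside each generated image. The folder names, class name, and plain Pillow compositing are illustrative assumptions, standing in for whatever CGI or generative tooling a team actually uses.

# Minimal synthetic-data sketch: composite cut-out objects onto backgrounds
# and emit bounding-box labels. Paths and the class name are hypothetical.
import json
import random
from pathlib import Path

from PIL import Image  # pip install pillow

BACKGROUNDS = list(Path("backgrounds").glob("*.jpg"))  # e.g. empty scenes
OBJECTS = list(Path("objects").glob("*.png"))          # cut-outs with alpha
OUT_DIR = Path("synthetic")
OUT_DIR.mkdir(exist_ok=True)

def make_sample(idx: int) -> dict:
    bg = Image.open(random.choice(BACKGROUNDS)).convert("RGB")
    obj = Image.open(random.choice(OBJECTS)).convert("RGBA")

    # Randomly scale the object to add size diversity.
    scale = random.uniform(0.2, 0.6)
    w, h = int(obj.width * scale), int(obj.height * scale)
    obj = obj.resize((w, h))

    # Random placement; the paste box doubles as the ground-truth label.
    x = random.randint(0, max(bg.width - w, 1))
    y = random.randint(0, max(bg.height - h, 1))
    bg.paste(obj, (x, y), mask=obj)  # alpha channel used as paste mask

    img_path = OUT_DIR / f"sample_{idx:05d}.jpg"
    bg.save(img_path)
    return {"image": img_path.name,
            "bbox": [x, y, x + w, y + h],  # [x_min, y_min, x_max, y_max]
            "label": "object"}             # hypothetical class name

if __name__ == "__main__":
    annotations = [make_sample(i) for i in range(100)]
    (OUT_DIR / "annotations.json").write_text(json.dumps(annotations, indent=2))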

However, these methods naturally introduce both human and AI biases. Their accuracy is often questionable, and feedback-loop errors can easily slip in. Synthetic data rarely reflects the real-world conditions models will face in production.

2. Data is shrinking (in relative terms)

The amount of raw data being produced globally is growing, but the rate at which synthetic data is being generated by AI is increasing even faster.

At some point, there will be more images generated by AI than captured by cameras, more text produced by LLMs than written by humans, and more speech synthesized by models than spoken into microphones.

This isn’t inherently a bad thing. But when synthetic data is used to train new models, it can lead to what’s known as model collapse. Synthetic data lacks the variability needed to generalize across different scenarios: models learn its repetitive patterns too well, overfit, and then fail in the real-world conditions that matter most in computer vision. With each iteration, a model reinforces its own biases.
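The dynamic is easy to see even in a toy setting. In the sketch below (a deliberately simplified illustration, not a claim about any particular vision model), the "real" data contains a small cluster of rare edge cases; each generation fits a single Gaussian to the current data and then trains the next generation only on samples from that fit.

# Toy illustration of model collapse: the "real" data has a common mode and a
# small cluster of rare edge cases. Each generation fits a single Gaussian
# (the "model") to the current data, then replaces the data with samples from
# that fit. The rare cluster's share of the data collapses after one
# generation and never recovers.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Generation 0: 95% ordinary cases around 0, 5% rare edge cases around 6.
data = np.concatenate([
    rng.normal(0.0, 1.0, size=int(0.95 * N)),
    rng.normal(6.0, 0.5, size=int(0.05 * N)),
])

for generation in range(5):
    edge_share = np.mean((data > 5.0) & (data < 7.0))  # the rare cluster
    print(f"gen {generation}: edge-case share = {edge_share:.2%}")
    # "Train" the model on the current data, then build the next generation's
    # training set purely from the model's own output.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=N)

The numbers are synthetic, but the mechanism is the one that matters: whatever the generator fails to represent simply stops existing for the next model trained on its output.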

Eventually, this makes existing data unfit for training, forcing the process back to the challenge of data collection.

3. Larger models are available, resulting in "automated labeling"

Today, models have grown so large that it’s common for an older or bigger model to train a newer, smaller one, leaving humans to handle only the edge cases in industry-specific scenarios. This results in pre-labeled data that human experts merely approve, correct, or reject—rarely labeling from scratch.

You’ve probably seen this on social media, where some image annotation tools appear to automate the whole labeling job. You may conclude that data labeling is dead.

But that’s fundamentally wrong.

First, if a third-party model can handle over 95% of what your model is being trained for, your model became obsolete before it was even trained.

Second, real-world experience shows that pre-annotations work only in ideal scenarios—perfectly shaped objects, universally recognized items, high-resolution images, and good lighting.

That rarely happens in nature. The images that add real value are messy, pixelated, captured from low-res cameras, and full of occlusions and ambiguities.

Perfect images don’t add value to models anymore. Bad images do.
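To make the workflow concrete, here is a rough sketch of confidence-based pre-labeling, assuming an off-the-shelf torchvision detector stands in for the larger model: confident predictions become pre-annotations for reviewers to approve or correct, while low-confidence images, typically the messy ones, are routed to subject-matter experts to label from scratch. The model choice, thresholds, and routing rule are illustrative, not a description of any specific production pipeline.

# Sketch of confidence-based pre-labeling: a large off-the-shelf detector
# pre-annotates images; confident predictions go to reviewers for approval,
# uncertain ones are routed to subject-matter experts. Thresholds and the
# routing rule are illustrative assumptions.
from pathlib import Path

import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Off-the-shelf detector standing in for the "larger model".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

REVIEW_THRESHOLD = 0.8  # above: pre-label for quick human approval
EXPERT_THRESHOLD = 0.5  # below: discard and send the image to an SME

def pre_label(image_path: Path) -> dict:
    img = convert_image_dtype(read_image(str(image_path)), torch.float)
    with torch.no_grad():
        pred = model([img])[0]  # dict with "boxes", "labels", "scores"

    keep = pred["scores"] >= EXPERT_THRESHOLD
    scores = pred["scores"][keep]
    route = ("approve_or_correct"
             if len(scores) > 0 and scores.min() >= REVIEW_THRESHOLD
             else "expert_labeling")
    return {
        "image": image_path.name,
        "boxes": pred["boxes"][keep].tolist(),
        "labels": pred["labels"][keep].tolist(),
        "scores": scores.tolist(),
        "route": route,
    }

if __name__ == "__main__":
    for path in Path("incoming_images").glob("*.jpg"):  # hypothetical folder
        task = pre_label(path)
        print(task["image"], "->", task["route"])

In practice, the routing rule is where the tuning happens: set the thresholds too leniently and reviewers rubber-stamp noise; set them too strictly and experts spend their time on images the model already handles.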

4. An unprecedented proliferation of open-source datasets of labeled images is available to data scientists

The AI community is gaining more media attention, and the open-source ecosystem is growing stronger. Universities, private researchers, and tech giants like Google, Microsoft, and Facebook are contributing by making their internal datasets public—initiatives with unclear motivations but welcomed by the community. Government agencies and public organizations are also releasing open datasets in fields like satellite imagery, public health, and environmental monitoring. Pre-labeled datasets are now easily accessible on platforms like Hugging Face and GitHub.

This abundance of data enables engineers to bypass foundational labeling, allowing them to focus on high-value tasks where specialized expertise can significantly enhance model accuracy.
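As an example of how low the barrier has become, a labeled image dataset can be pulled from the Hugging Face Hub in a few lines with the datasets library; cifar10 below is just a familiar placeholder, and any labeled vision dataset on the Hub works the same way.

# Pull a ready-labeled image dataset from the Hugging Face Hub.
# "cifar10" is only an example; any labeled vision dataset loads alike.
from datasets import load_dataset  # pip install datasets

ds = load_dataset("cifar10", split="train")

print(ds)                   # features and number of rows
example = ds[0]
print(example["img"].size)  # PIL image, e.g. (32, 32)
print(ds.features["label"].int2str(example["label"]))  # human-readable class

From there, engineers typically move straight to fine-tuning or evaluation rather than foundational labeling.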

5. Computer vision models are becoming increasingly specialized, leading to a partial abandonment of generalist models

There’s growing consensus that the future won’t be dominated by one-size-fits-all models, but rather a network of smaller, specialized, and more efficient models working together. The era of general-purpose models that can understand every industry is fading fast.

We’ve seen this shift firsthand. When we started, the labeling guidelines we used with most of our clients were just a couple of pages with basic rules and examples. Over the years, they have evolved into detailed manuals covering every edge case, and training a labeler now takes weeks in most cases.

Today, it’s nearly impossible for someone outside the industry to master these guidelines. Only subject matter experts—engineers, healthcare professionals—can handle the complexity. It takes real expertise to make judgment calls on edge cases — where the real value lies.

We’re also seeing more domain-specific tools replacing the once-popular, multi-domain SaaS platforms (Labelbox, V7). The trend is clear: large companies are moving toward in-house tools, leaving behind the generalist approach.

2025 and beyond

In essence, those in data labeling services agree: the only solution to increasing complexity is specialization.

For companies like ours, this means bringing in specialized SMEs, exploring niche verticals, and adopting advanced tools.

Data scientists, meanwhile, are seeing models consolidate within larger conglomerates, experimenting with constellations of interconnected models, and putting more focus on data collection and curation, both of which are becoming more vital to the process.

Needless to say, AI scientists must outsource image labeling to service companies focused on two things: data quality and specialization.

About the Author

Rodrigo Cardenete
Rodrigo is co-founder of BUNCH. With a background in design, operations, and development, he has taken on roles including COO and CMO.
