Factories Finally Get a Common Language for Location

According to Manufacturing AUTOMATION, PROFIBUS and PROFINET International (PI), AIM-D e.V., and the OPC Foundation announced on November 4, 2025, that they have jointly created the OPC UA Companion Specification for Identification and Locating. This new specification establishes a common language for spatial intelligence, harmonizing the data model for absolute positions. The goal is to enable a unified global positioning of assets across both physical and digital environments. The spec is now freely available on the OPC Foundation’s website and is considered a key requirement for physical AI initiatives. Matthias Jöst, committee leader for omlox at PI, stated that this creates the basis for a new generation of spatially networked systems.

Why This Matters Now

Here’s the thing: factories are getting more chaotic. Not in a bad way, but in a “we have mobile robots, stationary equipment, and smart tools all trying to work together” way. And until now, they’ve all been speaking different location languages. One system might use coordinates from an ultra-wideband tag, another from a camera system, and a third from Wi-Fi triangulation. They were all shouting positions, but nobody could understand each other.

This new spec is basically the universal translator. It doesn’t create a new tracking technology itself. Instead, it provides a common data model so that any locating technology—whether it’s omlox, GPS, or something else—can feed its data into industrial systems in a standardized way. That shared model is the prerequisite for all the autonomous, self-organizing capabilities the announcement keeps pointing to.
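To make the "universal translator" idea concrete, here is a deliberately simplified Python sketch of how a common location data model works in practice: each technology maps its native output onto one shared structure, and downstream systems only ever consume that structure. All names and fields here are illustrative assumptions, not taken from the actual OPC UA Companion Specification.

```python
from dataclasses import dataclass

@dataclass
class PositionReport:
    """Hypothetical common position model; field names are illustrative."""
    asset_id: str
    x: float           # metres, in a shared plant coordinate frame
    y: float
    z: float
    accuracy_m: float  # estimated positional error in metres
    source: str        # which locating technology produced the fix

def from_uwb(tag_id: str, coords_mm: tuple) -> PositionReport:
    """Adapter for an imaginary UWB system that reports in millimetres."""
    x, y, z = (c / 1000.0 for c in coords_mm)
    return PositionReport(tag_id, x, y, z, accuracy_m=0.3, source="uwb")

def from_wifi(mac: str, x_m: float, y_m: float) -> PositionReport:
    """Adapter for an imaginary Wi-Fi triangulation system (2-D only)."""
    return PositionReport(mac, x_m, y_m, 0.0, accuracy_m=5.0, source="wifi")

# Consumers (AMR fleet managers, digital twins, safety systems) see only
# PositionReport, regardless of the underlying tracking technology.
reports = [
    from_uwb("cart-17", (12500.0, 4200.0, 0.0)),
    from_wifi("aa:bb:cc:dd:ee:ff", 30.2, 18.9),
]
```

The point of the adapter pattern here is that adding a new locating technology means writing one small converter, not re-plumbing every consumer of location data.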

The Business Implications

So who wins here? Look, the big beneficiaries are companies trying to build flexible, self-organizing production lines. Think automotive plants where autonomous carts need to precisely interact with robotic arms, or warehouses where inventory is constantly on the move. The revenue model here is indirect but powerful. The spec itself is free, which is smart because it encourages widespread adoption.

The real money will be made by the system integrators and technology providers who can now build interoperable solutions without getting bogged down in custom interfaces. A company selling AMRs (Autonomous Mobile Robots) can now confidently say their bots will understand location data from a customer’s existing infrastructure. That removes a huge barrier to sale. It also future-proofs investments. You’re not locked into one vendor’s ecosystem anymore.

The Physical AI Angle

They keep mentioning “Physical AI.” What does that even mean? It’s not just an algorithm in the cloud making predictions. It’s AI that has to act in the real, physical world. And for that to work, AI needs a consistent, reliable understanding of where things are and how they’re moving.

Can you imagine trying to train a system for predictive maintenance or collision avoidance if every piece of equipment reports its location in a different format? It would be a nightmare. This specification is the foundational layer that makes Physical AI actually feasible at scale. It’s the boring, unsexy plumbing that makes the flashy AI applications possible. And honestly, that’s often where the real breakthroughs happen.
