
Explained | How CynLr's Object Intelligence Brings Human-Like Learning To Robots

Bengaluru-based CynLr has launched an Object Intelligence platform enabling robots to learn unfamiliar objects in seconds, targeting flexible, software-defined manufacturing.

Can Robots Learn Like Babies? CynLr Thinks So (Image Credits: CynLr)

By Anubha Jain

Published: February 23, 2026 at 6:06 PM IST


Bengaluru: Robots today can reliably handle only those objects they are explicitly programmed or trained to manage — a process that often requires months of data collection and fine‑tuning. Challenging this limitation, CynLr (Cybernetics Laboratory), a Bengaluru‑based deep‑tech startup, has unveiled its groundbreaking Object Intelligence (OI) Platform after five years of intensive research and development, supported by its Swiss R&D entity and US business development centre. CynLr’s platform is the first commercially ready OI solution to emerge from India.

At the heart of the platform is on‑the‑fly learning, an ability that mirrors how a human infant interacts with the world. Robots can learn to pick up completely unfamiliar objects within just 10 to 15 seconds. CynLr’s robots intuitively learn to handle objects that are transparent, reflective, or irregular in shape—from a glass bottle wrapped in plastic to everyday household items, metallic car parts, and electronics—unlocking what many consider the holy grail of robotics and intelligence.

This intuitive and adaptive capability significantly reduces the need for extensive training data, large‑scale computing infrastructure, and data centres, unlike conventional robotic automation. By enabling real‑time learning and adjustment, the Object Intelligence (OI) platform allows robots to manage unfamiliar objects without exhaustive prior training.

CynLr Claims Breakthrough in Real-Time Robotic Learning with OI System (Image Credits: CynLr)

As CynLr Founder Gokul explained, “A year-old baby cannot label objects, yet can instinctively reach, grasp, and explore them. This intuitive adaptability and the ability to act without prior familiarity are what CynLr seeks to replicate in machines. The next fifty years will be about cognition. Machines that observe, reason and adapt.”

The Founders’ Journey: From Industrial Challenges to Deep Tech Innovation

Gokul’s journey began in 2008-09 as an undergraduate, working on a coconut-harvesting robot and realising the limitations of existing robotic arms, vision systems, and intelligence. He then joined National Instruments (NI) to develop skills in machine vision and control, focusing on manipulation, an area where conventional identification-based vision systems failed.

At NI, he and his partner Nikhil Ramaswamy solved over 40 previously unsolved industrial automation problems across India, Arabia, Russia, Taiwan, and beyond. After three years of consulting following their NI careers and validating these solutions, they saw the commercial potential.

CynLr's Gokul NA and Nikhil Ramaswamy (Image Credits: CynLr)

In 2019, they founded CynLr to productize their technology, building scalable systems for advanced object intelligence and adaptive robotics.

Traditional Robotics vs Object Intelligence (OI)

In an exclusive interview with ETV Bharat, Gokul explained that while traditional automation is rigid and data-driven, true OI is dynamic, adaptive, and essential for building genuinely autonomous robots. “Our technology is being deployed in areas that were previously considered too complex to automate, especially assembly tasks. Even something as simple as picking a screw and fastening it without misalignment remains a major challenge for conventional robots, particularly across varying threads and orientations,” he said.

ALSO READ: The Rise Of Autonomous Robots: From Ancient Inventions To Future Game-Changers

"Traditional automation requires tightly controlled environments, heavy engineering, and constant maintenance to prevent failure from minor variations. With our system, robots can instantly adjust and adapt to changes without hyper-engineering every corner case. This eliminates environmental rigidity and reduces operational pain points, marking a fundamental shift in how assembly automation is achieved,” Gokul noted.

Robotic Suite: Cyro, CyNoid and the Upcoming Mantroid

The platform is form‑factor agnostic and can power industrial robotic arms, multi‑arm systems, or humanoid robots, with the capability to pick up any object it encounters. To translate its Object Intelligence into deployable systems, the company has developed a dedicated robotic suite. At present, CynLr offers two main products — Cyro and CyNoid — while an open‑hardware platform called Mantroid is set to be released soon. Mantroid serves as a “sandbox platform” that frees customers from fixed humanoid form factors, enabling them to custom‑build robotic designs tailored to their specific needs.

At the crux of CynLr’s perception platform is CLX, an OI-enabled vision system. CLX can instantly perceive an unseen object, deconstruct it, and trigger an intuition that lets the robot learn, through experimentation, how to manipulate it.

Gokul said, “A human-like dynamic system has been built in the CLX system. Our approach imitates how human vision actually works. Unlike conventional AI systems that rely heavily on colour images, the human eye first detects motion, rather than colour. Depth perception is then constructed using multiple cues, six of which are primitive and instinctive, such as motion and eye convergence.”

He further said that most cameras today cannot replicate this because moving optics and dynamic depth construction are computationally expensive. Instead, machines often rely on artificial methods like pattern projection, which do not truly interpret the scene. Humans also perceive depth through motion, like in 2D cartoons, where speed differences create a sense of distance.
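The motion cue Gokul describes is known in computer vision as motion parallax: when a camera moves sideways, nearer points shift across the image faster than distant ones. The sketch below is a generic textbook illustration of that principle, not CynLr's implementation; the focal length and baseline values are arbitrary assumptions chosen for the example.

```python
# Toy illustration of depth from motion parallax (a generic textbook
# formula, not CynLr's method). A camera translating sideways by
# `baseline_m` metres sees nearer points shift by larger pixel
# disparities between frames, so depth = focal * baseline / disparity.

def depth_from_parallax(disparity_px, focal_px=800.0, baseline_m=0.05):
    """Estimate depth (metres) from the pixel shift between two frames."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point that shifts 40 px between frames is nearer than one that
# shifts only 10 px, mirroring how speed differences in 2D cartoons
# create a sense of distance.
near = depth_from_parallax(40.0)  # 800 * 0.05 / 40 = 1.0 m
far = depth_from_parallax(10.0)   # 800 * 0.05 / 10 = 4.0 m
```

The same inverse relationship between image motion and distance underlies the depth-from-motion perception described above: fast-moving image points read as close, slow-moving ones as far.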

Can Robots Learn Like Babies? CynLr Thinks So (Image Credits: CynLr)

Colour is interpreted contextually afterwards, which is why lighting illusions can trick us. By mimicking these biological mechanisms, our vision system achieves robust, human-like dynamic perception, allowing it to function reliably across lighting changes, reflections, and unfamiliar object variations, he added.

CynLr’s recent neuroscience-linked research collaboration with the Indian Institute of Science (IISc) helps in building robotic systems that function closer to brain-like perception and adaptation. Gokul said, “Instead of relying on data-heavy, conventional machine learning, we model how the human brain and neurons process visual and spatial information.”

Elaborating with an example, he added, “When we tilt a coin, the reflections constantly change, yet our brain seamlessly combines those shifting cues to understand it. This visual stitching is fundamental to human perception and essential for true OI. Replicating it is not just a software task but a hardware challenge as well. It requires redesigning sensors, controllers, robotic arms, grippers, and vision systems. Since existing platforms are not built for this approach, we developed many of these components from scratch to enable adaptive intelligence.”

From Customer Deployments to the Universal Factory Vision

Asked which industries benefit most from the CynLr platform, Gokul said the strongest commercial traction for CynLr’s OI platform is in B2B manufacturing, with pilot deployments in the luxury automotive segment, such as Audi, and with semiconductor automation companies. These applications span automotive assembly lines and semiconductor fabrication labs, where robots can take over repetitive, manual tooling tasks.

The end goal is a Universal Factory: a software-defined factory where manufacturers equipped with CynLr robots can shift from one product model to another through code updates, without physically reconfiguring assembly lines. This flexibility is especially valuable in automotive, electronics gadget assembly, semiconductor fabs, and lab automation.

ALSO READ: AI, AR, & Robots: How Cutting-Edge Tech Is Powering India's Bullet Train Project

Switching between pre-trained tasks is instantaneous, similar-task calibration takes about an hour, and entirely new task training can range from one week to three months. In many complex assembly scenarios where automation previously did not exist, OI is not just improving efficiency; it is enabling automation for the first time. Simplification of manufacturing processes and the transition toward software-defined factories are the key takeaways for companies considering the integration of CynLr’s OI platform, Gokul added.

Reducing Dataset and Compute Requirements

In any assembly task, picking, orienting, and placing are the key steps. CynLr has made object picking instantaneous, drastically reducing the need for massive datasets, currently down to 30 per cent of traditional requirements, with compute and energy usage similarly cut. Orientation and placement still need training, but over the next three to four years, the goal is to reduce dataset and compute needs by up to 75–80 per cent.

Scaling to produce one robot per day by 2028 faces challenges in hardware, software, and the supply chain. Customers want ready-to-use solutions, so productisation and platformisation are critical. Customising components, investing in manufacturing capabilities, and creating intuitive interfaces for “touch-and-teach” operation are key bottlenecks. Once solved, customers could reprogram robots for new tasks within a month, drastically accelerating adoption and scaling volumes, Gokul said.

Bridging Physical Interaction and AI

Gokul emphasised that CynLr’s technology is not limited to a narrow niche; it addresses general-purpose robotic automation with object intelligence, bridging the gap between physical interaction and AI understanding. Current AI lacks real-world physical reasoning, such as understanding “soft” versus “hard,” limiting its effectiveness; integrating physical experience is key to holistic intelligence.

CynLr Claims Breakthrough in Real-Time Robotic Learning with OI System (Image Credits: CynLr)

For manufacturers, especially in automotive, this technology allows flexible, software-defined production lines, enabling rapid adaptation to changing consumer preferences and reducing the constraints of traditional, fixed manufacturing setups. Beyond industry, the platform represents a foundational infrastructure for the next wave of deep tech, similar to how GPS, mobile networks, and app ecosystems reshaped markets, making it a compelling opportunity for investors in transformative technology, he emphasised.

Five-Year Roadmap: Revenue, IPO, and Neural Representation into AI

Gokul outlined that over the next five years, CynLr aims to scale toward profitability or an IPO, targeting around $40 million in revenue to break even. The vision is to produce up to 300 robots per day and build an object and task store, where customers can upload and share task models globally, enabling robots anywhere to download learned skills.

Ultimately, the goal is to bring a neural representation of the physical world into AI, creating fully modular, software-defined factories where production can switch between products simply by updating software, transforming manufacturing flexibility at scale, he added.

If CynLr’s vision succeeds, OI could mark a decisive shift from programmed automation to truly cognitive machines, reshaping the future of industrial robotics.

ALSO READ: Era Of Autonomous AI And Digital SideKicks: What Are AI Agents And Why Should You Care?