Companion Labs, Inc. has achieved a major milestone in animal behavioral science with a newly patented autonomous training system, titled ‘Method for autonomously training an animal to respond to oral commands’. The patent describes a sophisticated AI-driven platform that automates the process of teaching animals specific poses through real-time audio-visual feedback and reinforcement.

Automating Behavioral Conditioning

Abstract: One variation of a method for autonomously training animals to respond to oral commands includes: prompting a user to select a training protocol; prompting the user to record a first audio clip of the user reciting a voice command associated with a target pose within the training protocol, the user affiliated with the animal; accessing a video feed recorded by an optical sensor during a first training session; in the video feed, detecting the animal within a working field; while the animal is detected in the working field, playing back the first audio clip; in the video feed, detecting a current pose of the animal; calculating a deviation between the current pose of the animal and the target pose; in response to the deviation falling within a threshold: playing an audio clip of a secondary reinforcer; and dispensing a unit of a primary reinforcer via an integrated dispenser.
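The core loop in the abstract (detect pose, compute deviation from the target pose, reward when the deviation falls within a threshold) can be sketched as follows. All names, and the choice of mean keypoint distance as the deviation metric, are illustrative assumptions; the patent does not specify the implementation.

```python
import math

def pose_deviation(current, target):
    """Mean Euclidean distance between corresponding keypoints.
    One plausible 'deviation' metric; the patent leaves it open."""
    return sum(math.dist(c, t) for c, t in zip(current, target)) / len(current)

def training_step(current_pose, target_pose, threshold,
                  play_secondary_reinforcer, dispense_primary_reinforcer):
    """One evaluation step: reinforce only if the detected pose is
    close enough to the target pose (hypothetical interface)."""
    deviation = pose_deviation(current_pose, target_pose)
    if deviation <= threshold:
        play_secondary_reinforcer()    # e.g. play back the click / audio cue
        dispense_primary_reinforcer()  # e.g. dispense a treat
        return True
    return False

# Example: a 3-keypoint pose with an illustrative tolerance of 0.5 units
rewarded = training_step(
    current_pose=[(0.1, 0.0), (0.5, 0.4), (1.0, 1.0)],
    target_pose=[(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)],
    threshold=0.5,
    play_secondary_reinforcer=lambda: None,
    dispense_primary_reinforcer=lambda: None,
)
```

In practice the pose keypoints would come from a computer-vision model running on the video feed, and the reinforcer callbacks would drive the speaker and the integrated dispenser; the threshold is the tuning knob that separates a "target pose" from a near-miss.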

Swanson Reed Patent of the Month Recognition

The selection of Companion Labs, Inc. for the Swanson Reed Patent of the Month in March 2026 marks a transformative moment for the Zoos, Wildlife and Nature Preservation industry. Traditionally, animal training has been a labor-intensive process, prone to human error and inconsistency in timing. This patent is outstanding because it introduces a high level of clinical precision to animal husbandry, allowing for the autonomous reinforcement of behaviors without the variable of a human trainer’s presence. By removing human bias and potential stress from the equation, the system ensures that animals receive consistent, clear signals, which is vital for the mental welfare of species in captive and conservation environments.

Technically, the invention represents a remarkable fusion of ethology and advanced computer vision. The system’s ability to calculate “pose deviation” in real time is a masterclass in engineering: it effectively allows a machine to understand the nuances of animal movement. Unlike basic motion sensors, this technology distinguishes between a successful “target pose” and a near-miss, ensuring that only correct behavioral responses are rewarded. This level of granularity is essential for training the complex behaviors required for medical examinations or enrichment activities in zoo settings, making the system a pioneer in digital wildlife management.

Furthermore, the integration of user-recorded audio clips preserves the social bond between the animal and its primary caretaker, even in the caretaker’s absence. This foresight addresses a common issue in wildlife preservation where animals may become desensitized to generic mechanical sounds. By bridging the gap between high-tech AI and the fundamental science of positive reinforcement, Companion Labs has created a scalable, humane solution for animal education. This achievement rightfully earns its place as an industry-leading invention, promising to enhance the quality of life for animals globally through consistent, cognitive engagement.

U.S. R&D Tax Credit Eligibility

To qualify for the R&D tax credit in the USA under Section 41, a company must satisfy the Four-Part Test. Companion Labs’ development of this autonomous training system aligns with these requirements:

  • Permissible Purpose: The project’s intent was to create a new or improved functional component for animal training, specifically improving the accuracy and autonomy of behavioral reinforcement.
  • Elimination of Uncertainty: The engineering team faced technical uncertainty regarding the optimal pose-detection algorithms required to accurately interpret the anatomy of different species under varying lighting and environmental conditions.
  • Process of Experimentation: The company engaged in a systematic process of evaluating different computer vision models and reinforcement timing through iterative testing and data analysis to find the optimal threshold for “pose deviation.”
  • Technological in Nature: The research and development relied on hard sciences: primarily computer science, robotics, and behavioral biology.

3 Practical R&D Applications

The following applications of this technology highlight how specific development activities can meet the requirements of the R&D tax credit:

  • Multi-Species Pose Modeling: Conducting research and development to refine neural networks so they can accurately detect poses across diverse species (e.g., from canines to primates). This involves significant experimentation to account for varying skeletal structures and movement patterns.
  • Low-Latency Hardware Integration: Engineering and testing the integration of optical sensors with mechanical treat dispensers to ensure that the “primary reinforcer” is delivered within milliseconds of a successful pose. This requires solving technical challenges related to signal processing and mechanical response times.
  • Environmental Occlusion Mitigation: Developing and testing specialized software filters that allow the optical sensor to maintain tracking of an animal even in cluttered or outdoor environments, such as a zoo enclosure with shifting shadows, vegetation, or moving debris.