Great Hardware Needs Great Software
We have a broad and powerful suite of software that works together to make our robots engaging, highly customisable and intuitive to use. Developed in house, some of our software leverages, expands upon, and combines a number of open-source projects.
All of our robots, with all of their unique hardware and functionality requirements, run on one unifying software framework: Tritium. Developed in-house and refined for over 12 years, it is what drives all of our robots, and there are many things that make it special.
First and foremost, Tritium is designed to be remotely operated. It is very impractical to plug a keyboard, monitor and mouse into a 6ft humanoid robot every time a small change is needed - even worse for a fleet of robots. So Tritium is designed to work inside a browser, from any internet-connected machine with the correct log-in credentials, making it a walk in the park to check, maintain, update and adjust the robot from any machine you like. Changing the robot's behaviour, adding content, or starting a telepresence session (powered by TinMan) can all happen from anywhere, on the fly, even whilst the robot is running and wowing the public. This opens up many options for remote support and diagnostics from Engineered Arts HQ when required, so when a technical difficulty arises, downtime is kept to an absolute minimum.
Tritium is designed so a robot's functionality can be built up in small software blocks, or nodes. These nodes can contain code from almost any programming language, and run locally or on a cloud-based service. The code is unmodified, but stored in a wrapper that Tritium understands, meaning that all software, no matter how disparate the languages, can communicate and interact. This lets you write in code you understand, or plug in software from a specific supplier, without fear of compatibility issues.
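To make the node idea concrete, here is a minimal sketch in Python. The `Node` and `Bus` classes, and their method names, are invented for illustration - Tritium's real wrapper API is not public in this document - but the sketch shows the principle: code written independently can interoperate as long as it exchanges messages through a language-neutral envelope such as JSON.

```python
# Conceptual sketch only: "Node", "Bus", "publish" and "on" are
# hypothetical names illustrating how a wrapper layer lets code in
# any language exchange messages. This is NOT the Tritium API.

import json

class Node:
    """A minimal stand-in for a Tritium-style node wrapper."""
    def __init__(self, name):
        self.name = name
        self.handlers = {}
        self.bus = None  # set when attached to a bus

    def on(self, topic, handler):
        """Register a callback for messages on a topic."""
        self.handlers[topic] = handler

    def publish(self, topic, payload):
        if self.bus:
            self.bus.route(topic, payload)

    def receive(self, topic, payload):
        if topic in self.handlers:
            self.handlers[topic](payload)

class Bus:
    """Routes messages between nodes as JSON, so it doesn't matter
    what language produced the payload on the other side."""
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        node.bus = self
        self.nodes.append(node)

    def route(self, topic, payload):
        # Round-trip through JSON: the language-neutral envelope.
        message = json.loads(json.dumps(payload))
        for node in self.nodes:
            node.receive(topic, message)

# Two nodes written independently can still interoperate:
bus = Bus()
speech = Node("speech")
animator = Node("animator")
bus.attach(speech)
bus.attach(animator)

received = []
animator.on("say", lambda msg: received.append(msg["text"]))
speech.publish("say", {"text": "Hello!"})
print(received)  # → ['Hello!']
```

In a real system the JSON envelope would cross process or network boundaries, which is what allows, say, a C++ vision node and a Python behaviour node to cooperate.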
No matter how apparently simple, all robots are complicated. To work well and make good decisions, a robot needs access to all available data: sensors, encoders, motors, network traffic, code, video streams, microphone inputs, physical conditions. The robot needs to be able to communicate with all of these things too, and they all require differing amounts of time to receive, send and respond to data. In an environment where waiting for data can mean the difference between shaking somebody's hand or slapping them in the face, there needs to be a system in place that can accommodate these variations. Tritium uses a smart buffering system, allowing our robots to make timely decisions quickly and safely.
Tritium is flexible and can be used to operate almost any hardware component, which is why it is used on RoboThespian, SociBot, Mesmer, custom commissions, and even robotic theatres, with DMX lighting and projectors. Tritium is built with hardware flexibility in mind, so it could be run on almost any hardware platform.
With so many different software nodes, device requests and demands, there are going to be conflicts: moments when a robot is being asked to do two or more things simultaneously that simply aren't possible at the same time. This results in undesirable and often unpredictable behaviour. It can be much more complicated than this, but Tritium is the means by which one demand is prioritised over another and conflicts are resolved. This means that, from the outside, robots behave in an expected and pleasing manner.
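A heavily simplified sketch of demand arbitration follows. The priority values and joint names are invented, and real arbitration involves blending, rate limits and safety overrides, but the core rule - highest-priority demand wins per output - looks like this:

```python
# Hypothetical sketch of demand arbitration. Priority numbers and joint
# names are invented for illustration; real resolution is richer than this.

def arbitrate(demands):
    """demands: list of (priority, joint, position) tuples.
    Returns the winning position per joint: highest priority wins,
    and later entries break ties."""
    winners = {}
    for priority, joint, position in demands:
        if joint not in winners or priority >= winners[joint][0]:
            winners[joint] = (priority, position)
    return {joint: pos for joint, (prio, pos) in winners.items()}

demands = [
    (1, "head_yaw", 0.4),    # idle animation wants the head turned left
    (5, "head_yaw", -0.2),   # a telepresence operator demand overrides it
    (1, "arm_pitch", 0.9),   # no conflict here, so it passes through
]
print(arbitrate(demands))  # → {'head_yaw': -0.2, 'arm_pitch': 0.9}
```

The point is that conflicting requests never reach the motors raw; something deterministic decides first, which is why behaviour stays predictable from the outside.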
Virtual Robot offers a powerful way of programming content for a robot with an intuitive, multi-layered graphical user interface. Designed by Engineered Arts, this web-enabled software runs on HTML5 with WebGL, requires no installation, and is cross-system compatible. The robot can be posed using the mouse, simply by clicking and dragging limbs, head, hands, eye graphics and so on. Pre-stored poses, sounds, animations, or even sub-sequences can be loaded from the library, and a drag-and-drop timeline allows you to control and edit all elements of your performance. Users looking for more in-depth information can access sensor data or look at the explicit command output from the timeline sequences.
Through Virtual Robot, multiple users can now create content for your robot simultaneously, whether in a teaching environment, at a science education facility, or even remotely via the internet. Remote users can store their routine on the cloud server and recall it on a real robot using an associated QR code. Stored routines can be filtered through a local administrative account, to arrange precedence or scan for unsuitable content.
The robots at Engineered Arts are designed to be public-facing and user-friendly. All robots come with a simple, graphical-based touch-screen interface installed, which can be used to access most of the important functionality of the robot. The user can select from multiple languages, and a locked administrative section allows higher-level users to view hardware diagnostics, control the public-facing touchscreen pages, and edit the robot's content.
Users can create their own routines, choosing from a large array of pre-programmed poses, sound effects and eye graphics, and incorporating text-to-speech functionality in multiple languages.
View the streaming camera output from the on-board depth sensor and chest-mounted RGB camera, with data overlay from our pose recognition system.
Play a pre-recorded sequence from our extensive library of entertaining and informative robot routines. The sequence library can be fully managed from the touchscreen interface by an administrator.
Restricted-access screens allow further control of the robot for higher-level and supervisory users. Here you can not only view sensor data and check diagnostic information, you can also manage the robot's content and upload or remove sequences at the click of a button.
Control your robot directly at the press of a button. Move the robot's head and eyes, change its LEDs and animation graphics, select some pre-configured poses and movements, and choose from a variety of greetings and responses. You can also see the camera feed from the high-definition head-mounted camera.
Imagine a robot that doesn’t behave like a robot.
A robot with natural intelligence. A robot that captivates and inspires wherever it goes. Introducing TinMan: telepresence, by Engineered Arts. Now you can be the robot, from anywhere in the world. TinMan hands you real-time control of a RoboThespian, SociBot or Mesmer robot. Using inbuilt cameras and microphones, you can control its gaze, enter a natural conversation, and trigger content on the fly. Automated features like maintaining eye contact keep interaction compelling and believable, leaving you free to concentrate on the fun stuff: BLOWING PEOPLE'S MINDS! TinMan offers unrivalled levels of engagement and provides lasting positive memories of your business or attraction. Complex technology and clever design disappear into simple, intuitive use, enabling truly breathtaking interaction with no distractions, while instant visual feedback of the robot keeps you looking the part. All this power is packed into the simplicity of a browser, so there is no need to install anything. Designed from the ground up, TinMan works on desktop, tablet and even smartphone, so you can beam into any of your robots. Just log in and go.
All our robots come with a large amount of pre-installed content, ranging from simple eye animations and sound effects through to entire performance routines, using speech, song and movement. Accessing and editing this content has never been easier. Our browser-based asset management system allows you to drag and drop sequence files between your local repository and the robot, and edit the sequences in-browser.
All our robots are provided with a wide array of sensors, including cameras, microphones and position encoders, ensuring they are responsive and interactive machines. This data is available in real-time in your browser - on our sensor data page, you can compare inputs and outputs, observe response times and use this information when creating your own control functions.
Visual feeds are also available, from the high-resolution head camera, the RGB chest camera and the chest-mounted depth sensor. You can also view SHORE outputs overlaid on the image data.
Sequences can be a powerful method of delivering content; however, they are not responsive and do not utilise the full capabilities of RoboThespian. For users wishing to delve more deeply into the possibilities afforded by our robots, we provide an integrated development environment where you can use Python to create your own control functions and subroutines for the robot. Use the robot's many sensors to register data from the environment; control the hardware to make the robot truly responsive.
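As a flavour of what such a control function might look like, here is a hedged sketch of a face-tracking subroutine. The robot API names (`get_sensor`, `get_joint`, `set_joint`, `face_offset_x`) are invented for this example - the real IDE exposes its own bindings - and a small fake robot is included so the sketch runs anywhere.

```python
# Hedged sketch: the robot API names below are invented for illustration;
# consult the IDE's own bindings for the real calls.

def track_face(robot, gain=0.5, deadband=0.05):
    """Turn the head toward a detected face - a typical responsive
    subroutine that reads a sensor and drives a joint."""
    face_x = robot.get_sensor("face_offset_x")  # -1.0 (left) .. 1.0 (right)
    if face_x is None or abs(face_x) < deadband:
        return 0.0  # no face, or already close enough
    correction = -gain * face_x  # proportional correction toward the face
    robot.set_joint("head_yaw", robot.get_joint("head_yaw") + correction)
    return correction

# Minimal fake robot so the sketch runs outside the IDE:
class FakeRobot:
    def __init__(self):
        self.joints = {"head_yaw": 0.0}
        self.sensors = {"face_offset_x": 0.4}  # face slightly to the right
    def get_sensor(self, name): return self.sensors.get(name)
    def get_joint(self, name): return self.joints[name]
    def set_joint(self, name, value): self.joints[name] = value

r = FakeRobot()
track_face(r)
print(r.joints["head_yaw"])  # → -0.2
```

Looping a function like this against live sensor data is the difference between a scripted sequence and a robot that genuinely reacts to its environment.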
Kaiserslautern Technical University: developing using our IDE
Almost all of what our robots do can be controlled remotely over the web, thanks to a RESTful API. You can use our software and interface to do this, or you can build your own interface with custom commands, readouts and UI - all remotely!
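The shape of such a remote call can be sketched with nothing but the Python standard library. The host name, endpoint path and payload below are all invented for illustration - the real routes live in the API reference - but they show how a custom interface would drive the robot: build a JSON POST request and send it.

```python
# Sketch only: "robot.local", "/sequences/play" and the payload are
# hypothetical, illustrating the shape of a RESTful robot command.
# Uses only the Python standard library.

import json
import urllib.request

BASE = "http://robot.local/api"  # hypothetical robot address

def build_command(path, payload):
    """Build (but don't send) a JSON POST request to the robot's API."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_command("/sequences/play", {"name": "welcome_routine"})
print(req.full_url)  # → http://robot.local/api/sequences/play

# urllib.request.urlopen(req) would dispatch it; omitted here because
# the endpoint is illustrative, not a documented route.
```

Because it is plain HTTP with JSON, the same command works from any language or platform that can make a web request, which is exactly what makes custom interfaces practical.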