The SentiBotics development kit is designed to provide a starting point for researchers and developers who want to focus on robot development. SentiBotics can also be used as an educational platform at universities.
The kit includes a mobile autonomous robot with a manipulator, a ROS-based SDK and the full source code of all algorithms used in the robot software. The robot can navigate in a common indoor environment, recognize 3D objects, and grasp and manipulate them.
The ROS-based SDK includes 3D models of the robot and a number of programming samples for the mentioned actions. A trial version of the SentiBotics SDK is available for running in the Gazebo robotics simulator.
Neurotechnology began research and development in the field of autonomous robotics in 2004. Ten years later, in 2014, Neurotechnology released the SentiBotics development kit, which includes easy-to-set-up robotic hardware and original proprietary algorithms for autonomous navigation, 3D object recognition and grasping:
The SentiBotics robot hardware is shipped as a set of several components (tracked platform, robotic arm, cameras, etc.) that need to be assembled and connected. All necessary instructions are included. The on-board computer comes with Ubuntu Linux and the SentiBotics software pre-installed. See the robotic platform specifications for more information.
The software for robot navigation, object recognition and object manipulation with the robotic arm is based on the widely used Robot Operating System (ROS). Researchers and developers may use their experience with ROS and their existing ROS-based software to work with the SentiBotics development kit. The SentiBotics kit includes a ROS-based infrastructure that allows third-party hardware components or robotics algorithms to be integrated.
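For example, a third-party sensor can be integrated simply by wrapping its driver in an ordinary ROS publisher that other nodes can subscribe to. The following minimal roscpp sketch relays readings from a hypothetical external range sensor; the topic and frame names are illustrative and are not part of the SentiBotics API.

// Minimal roscpp node sketch: publishes readings from a hypothetical
// third-party range sensor so that other ROS nodes can consume them.
// Topic and frame names are illustrative, not part of SentiBotics.
#include <ros/ros.h>
#include <sensor_msgs/Range.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "third_party_sensor_bridge");
    ros::NodeHandle nh;

    // Publish on an arbitrary topic; subscribers only need to agree on the name.
    ros::Publisher pub = nh.advertise<sensor_msgs::Range>("/third_party/range", 10);

    ros::Rate rate(10);  // 10 Hz
    while (ros::ok())
    {
        sensor_msgs::Range msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "third_party_sensor_link";
        msg.radiation_type = sensor_msgs::Range::ULTRASOUND;
        msg.min_range = 0.02f;
        msg.max_range = 4.0f;
        msg.range = 1.0f;  // replace with a reading from the actual driver

        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}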
SentiBotics uses an original navigation algorithm based on recognizing certain elements of the environment. The robot first needs to explore the environment and build a map of it. Users may map the environment manually by controlling the robot with the included control pad, or by writing a simple set of movement instructions. After the environment map is built, the robot is able to move, navigate and operate in the environment completely autonomously. See the navigation section for more information.
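As an illustration of such a set of movement instructions, the sketch below drives the platform forward and then turns it in place by publishing standard ROS velocity commands. The /cmd_vel topic name follows the usual ROS convention and is an assumption here; the actual topic is defined by the SentiBotics software.

// Illustrative "movement instructions" sketch: drive forward, then rotate
// in place, by publishing velocity commands at a fixed rate.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

// Publish a constant velocity command for the given duration.
void drive(ros::Publisher& pub, double linear, double angular, double seconds)
{
    geometry_msgs::Twist cmd;
    cmd.linear.x = linear;    // m/s
    cmd.angular.z = angular;  // rad/s

    ros::Rate rate(10);
    ros::Time end = ros::Time::now() + ros::Duration(seconds);
    while (ros::ok() && ros::Time::now() < end)
    {
        pub.publish(cmd);
        rate.sleep();
    }
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "mapping_drive_script");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("/cmd_vel", 10);

    drive(pub, 0.2, 0.0, 5.0);   // forward for 5 s
    drive(pub, 0.0, 0.5, 3.0);   // turn in place for 3 s
    drive(pub, 0.0, 0.0, 1.0);   // stop
    return 0;
}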
SentiBotics includes a set of original, computer-vision-based algorithms for object learning and recognition. Users may teach the robot to recognize a previously unknown object by placing it in front of the robot’s 3D camera and assigning an identifier to it. The robot will then be able to recognize the learned object in the environment. Users may also specify which objects should be grasped with the robot’s arm; once the robot sees a specified object within grasping range, it will try to grasp it and place it into the attached container. See the object learning, recognition and grasping section for more information.
The SentiBotics kit includes full source code for the algorithms used in the robot, together with working software samples for autonomous navigation, object recognition and manipulation. SentiBotics algorithms are written in C++ and designed to run on the specified robotic hardware, but they can be ported to other robotic platforms that include the specified or a better computer.
Users may purchase additional components to upgrade the robot, change its functionality or build other robots that will run the SentiBotics software.
SentiBotics software can be run in the Gazebo robotics simulator for algorithm evaluation and software development without using real robotic hardware. See the robotic simulator section for more information. A 30-day trial version of the SentiBotics development kit is available for download.
The SentiBotics 2.0 Robotics Software Development Kit is designed to be used with the provided ready-to-use robotic hardware and includes:
SentiBotics 2.0 was tested on the specified robot hardware with ROS Indigo Igloo, deployed on Ubuntu 14.04. SentiBotics algorithms can be ported to other robotic platforms that include the specified or a better computer and run the specified Linux and ROS versions.
SentiBotics 2.0 robotic hardware consists of:
Note that the robotic platform is designed for indoor use in an office-like or laboratory environment.
The robot must learn an object’s appearance in advance before it can be sent to locate, grasp and retrieve that object.
The SentiBotics object segmentation algorithm tries to locate object candidates with certain properties (i.e. well-separable point clusters that lie on a planar support). Each candidate is compared with the learned object models, and a label is assigned if a match is found.
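The same idea can be sketched with the Point Cloud Library (PCL), which is available in ROS Indigo. The code below is not the SentiBotics implementation, only an illustration of the approach and of assumed parameter values: fit and remove the dominant supporting plane, then treat the remaining well-separated Euclidean clusters as object candidates.

// Sketch of "clusters on a planar support" segmentation using PCL.
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/common/io.h>                      // pcl::copyPointCloud
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

// Returns one point cloud per object candidate found above the supporting plane.
std::vector<Cloud::Ptr> findObjectCandidates(const Cloud::Ptr& scene)
{
    // 1. Fit the dominant plane (the table or floor the objects rest on).
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);             // 1 cm tolerance (assumed value)
    seg.setInputCloud(scene);

    pcl::PointIndices::Ptr planeInliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    seg.segment(*planeInliers, *coefficients);

    // 2. Remove the plane so that only potential object points remain.
    Cloud::Ptr objects(new Cloud);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(scene);
    extract.setIndices(planeInliers);
    extract.setNegative(true);
    extract.filter(*objects);

    // 3. Group the remaining points into well-separated Euclidean clusters.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(objects);

    std::vector<pcl::PointIndices> clusterIndices;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.03);               // points closer than 3 cm join a cluster
    ec.setMinClusterSize(100);                  // discard tiny clusters (sensor noise)
    ec.setSearchMethod(tree);
    ec.setInputCloud(objects);
    ec.extract(clusterIndices);

    // 4. Copy each cluster into its own cloud; each one is an object candidate
    //    to be matched against the learned object models.
    std::vector<Cloud::Ptr> candidates;
    for (size_t i = 0; i < clusterIndices.size(); ++i)
    {
        Cloud::Ptr candidate(new Cloud);
        pcl::copyPointCloud(*objects, clusterIndices[i], *candidate);
        candidates.push_back(candidate);
    }
    return candidates;
}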
An object is learned simply by placing it in front of the robot and specifying the object’s name. Only one object at a time should be present in the frame during the learning phase. It is recommended to enroll the object from different positions and distances.
The user can order the robot to grasp a particular object. The robot will grasp the object once it has been correctly recognized.
The SentiBotics robot can automatically determine an object’s orientation and arrange its manipulator in the way best suited for grasping the object according to its position in the scene. The robot can also automatically reposition itself in order to perform the grasping task. For example, it can drive closer and/or turn to reach an optimal position for picking up the object.
Path planning for the robot’s manipulator is performed automatically to avoid obstacles that might be between the recognized object and the manipulator. Grasping is performed by closing the gripper and measuring the gripper’s position and the force feedback of the finger servo motor. The grasp is considered successful if the gripper is not fully closed and the measured force is large enough.
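A minimal version of this success test might look as follows; the function name and threshold values are assumptions for illustration only, not SentiBotics API.

// Illustrative grasp-success check based on the rule described above:
// the grasp is accepted only if the gripper did not close completely and
// the finger servo reports enough force.
bool isGraspSuccessful(double gripperOpening, double fingerForce)
{
    const double kMinOpening = 0.005; // 5 mm: a fully closed gripper caught nothing (assumed)
    const double kMinForce   = 0.3;   // normalized force threshold from the finger servo (assumed)

    return gripperOpening > kMinOpening && fingerForce > kMinForce;
}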
The robot can perform both vertical and non-vertical grasps. The grasp type is determined from the recognized object’s point cloud.
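For illustration, a simple heuristic for this decision could compare the object’s height with its horizontal footprint, as sketched below; the rule and the thresholds are assumptions, not the actual SentiBotics logic.

// Sketch of choosing between a vertical (top-down) and a non-vertical
// (side) grasp from the candidate object's point cloud.
#include <algorithm>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/common.h>  // pcl::getMinMax3D

enum GraspType { VERTICAL_GRASP, SIDE_GRASP };

GraspType chooseGraspType(const pcl::PointCloud<pcl::PointXYZ>& object)
{
    pcl::PointXYZ minPt, maxPt;
    pcl::getMinMax3D(object, minPt, maxPt);  // axis-aligned bounding box

    const float height = maxPt.z - minPt.z;
    const float width  = std::max(maxPt.x - minPt.x, maxPt.y - minPt.y);

    // Assumed heuristic: tall, narrow objects (e.g. bottles) are approached
    // from the side; flat or wide objects are approached from above.
    return (height > width) ? SIDE_GRASP : VERTICAL_GRASP;
}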
SentiBotics object grasping and manipulation have some requirements and constraints, which are described below.
The objects in the grasping scene should also satisfy certain conditions:
SentiBotics autonomous object delivery combines the autonomous navigation, object recognition and object grasping functionalities. The robot performs the following sequence of actions after it receives a delivery command (a code sketch of the sequence follows the list):
The robot navigates through its previously mapped locations until it reaches the location where the specified object was recognized.
The robot tries to recognize the assigned object directly, repositioning itself until recognition succeeds and grasping is possible.
The object is grasped with the robotic arm and placed into the attached box.
The robot delivers the object to the location where the delivery command was issued.
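The following sketch makes the order of these steps explicit. Every function called here (navigateTo, recognize, repositionForGrasping, graspAndStow) is a hypothetical placeholder for the corresponding SentiBotics SDK functionality, shown only to illustrate the control flow.

// High-level sketch of the delivery sequence listed above.
#include <string>
#include <vector>

bool navigateTo(const std::string& location);             // hypothetical placeholder
bool recognize(const std::string& objectId);              // hypothetical placeholder
bool repositionForGrasping(const std::string& objectId);  // hypothetical placeholder
bool graspAndStow(const std::string& objectId);           // hypothetical placeholder

bool deliverObject(const std::string& objectId,
                   const std::vector<std::string>& mappedLocations,
                   const std::string& homeLocation)
{
    // 1. Visit previously mapped locations until the object is recognized.
    bool found = false;
    for (size_t i = 0; i < mappedLocations.size() && !found; ++i)
    {
        if (navigateTo(mappedLocations[i]) && recognize(objectId))
            found = true;
    }
    if (!found)
        return false;

    // 2. Reposition until the object can be grasped, then grasp it
    //    and place it into the attached box.
    if (!repositionForGrasping(objectId) || !graspAndStow(objectId))
        return false;

    // 3. Bring the object back to where the delivery command was issued.
    return navigateTo(homeLocation);
}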
The Gazebo robotics simulator can be used for developing and testing SentiBotics-based software without real robotic hardware.
A 30-day trial version of the SentiBotics development kit is available for evaluating SentiBotics algorithms in the Gazebo simulator.
The videos below contain several tutorials about using SentiBotics in the Gazebo simulator:
Please contact us for information about licensing the SentiBotics robotics development kit.