SentiBotics

Contact us for price

Ready-to-use robotics development kit

SKU: NTRSDKSB105

Overview


The SentiBotics development kit is designed to provide a starting point for researchers and developers who would like to focus on robot development. SentiBotics can also be used as an educational platform at universities.

The kit includes a mobile autonomous robot with a manipulator, a ROS-based SDK and the full source code of all algorithms used in the robot software. The robot can navigate in a common environment, recognize 3D objects, and grasp and manipulate them.

The ROS-based SDK includes 3D models of the robot and a number of programming samples for the mentioned actions. A trial version of the SentiBotics SDK is available for running in the Gazebo robotics simulator.

Neurotechnology began research and development in the autonomous robotics field in 2004. Ten years later, in 2014, Neurotechnology released the SentiBotics development kit, which includes easy-to-set-up robotic hardware and original proprietary algorithms for autonomous navigation, 3D object recognition and grasping:

  • Simple assembly and setup of the robot hardware

The SentiBotics robot hardware is shipped as a set of several components (tracked platform, robotic arm, cameras, etc.), which need to be assembled and connected. All necessary instructions are included. The on-board computer comes with Ubuntu Linux and the SentiBotics software pre-installed. See the robotic platform specifications for more information.

  • Software is based on ROS (Robot Operating System) framework

The software for robot navigation, object recognition and object manipulation using the robotic arm is based on the widely used Robot Operating System. Researchers and developers may use their experience with ROS and their existing ROS-based software to work with the SentiBotics development kit. The SentiBotics kit includes a ROS-based infrastructure that allows users to integrate third-party hardware components or robotics algorithms.
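
For illustration only, the sketch below shows the shape a third-party roscpp (ROS Indigo era) node might take when plugged into the robot's topics. The topic names /sentibotics/odom and /sentibotics/cmd_vel are assumptions made for this example, not documented SentiBotics interfaces:

    // Hypothetical third-party node: reacts to odometry updates by
    // publishing velocity commands. Topic names are assumptions.
    #include <ros/ros.h>
    #include <nav_msgs/Odometry.h>
    #include <geometry_msgs/Twist.h>

    ros::Publisher cmd_pub;

    // Called on each odometry update; replies with a (trivial) command.
    void odomCallback(const nav_msgs::Odometry::ConstPtr& odom)
    {
        geometry_msgs::Twist cmd;
        cmd.linear.x = 0.1;   // creep forward at 0.1 m/s
        cmd_pub.publish(cmd);
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "third_party_node");
        ros::NodeHandle nh;
        cmd_pub = nh.advertise<geometry_msgs::Twist>("/sentibotics/cmd_vel", 1);
        ros::Subscriber sub = nh.subscribe("/sentibotics/odom", 1, odomCallback);
        ros::spin();  // process callbacks until shutdown
        return 0;
    }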

  • SLAM and navigation

SentiBotics uses an original navigation algorithm based on recognizing certain elements of an environment. The robot first needs to explore the environment and build a map of it. Users may run the mapping process manually by controlling the robot via the included control pad, or by writing a simple set of movement instructions. After the environment map is built, the robot is able to move, navigate and operate in the environment completely autonomously. See the navigation section for more information.
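
A minimal sketch of such a set of movement instructions appears below, written as an open-loop script of standard geometry_msgs/Twist commands. The topic name /sentibotics/cmd_vel and the timings are illustrative assumptions, not part of the documented SDK interface:

    // Hypothetical "simple set of movement instructions" for mapping:
    // an open-loop script of velocity commands.
    #include <ros/ros.h>
    #include <geometry_msgs/Twist.h>

    // Publish a constant velocity for the given duration, then stop.
    void move(ros::Publisher& pub, double linear, double angular, double seconds)
    {
        geometry_msgs::Twist cmd;
        cmd.linear.x = linear;
        cmd.angular.z = angular;
        ros::Rate rate(10);  // 10 Hz command stream
        ros::Time end = ros::Time::now() + ros::Duration(seconds);
        while (ros::ok() && ros::Time::now() < end) {
            pub.publish(cmd);
            rate.sleep();
        }
        pub.publish(geometry_msgs::Twist());  // zero twist = stop
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "mapping_script");
        ros::NodeHandle nh;
        ros::Publisher pub =
            nh.advertise<geometry_msgs::Twist>("/sentibotics/cmd_vel", 1);
        ros::Duration(1.0).sleep();   // let the publisher connect
        move(pub, 0.2, 0.0, 5.0);     // drive forward for 5 s
        move(pub, 0.0, 0.5, 3.0);     // turn left for 3 s
        move(pub, 0.2, 0.0, 5.0);     // drive forward again
        return 0;
    }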

  • Object recognition and manipulation

SentiBotics includes a set of original, computer-vision-based algorithms for object learning and recognition. Users may teach the robot to recognize a previously unknown object by placing it in front of the robot’s 3D camera and assigning an identifier to it. The robot will then be able to recognize the learned object in the environment. Users may also specify which objects should be grasped with the robot’s arm; once the robot sees a specified object within grasping range, it will try to grasp it and place it into the attached container. See the object learning, recognition and grasping section for more information.
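
As a purely hypothetical illustration of assigning an identifier, the sketch below publishes an object name to an assumed topic. The topic name /sentibotics/learn_object and its std_msgs/String message type are assumptions; the shipped SDK samples define the actual interface:

    // Hypothetical sketch: name the object currently in front of the camera.
    #include <ros/ros.h>
    #include <std_msgs/String.h>

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "teach_object");
        ros::NodeHandle nh;
        // Latched publisher so the single message is not lost.
        ros::Publisher pub =
            nh.advertise<std_msgs::String>("/sentibotics/learn_object", 1, true);
        std_msgs::String id;
        id.data = "coffee_mug";      // identifier assigned to the shown object
        pub.publish(id);
        ros::Duration(1.0).sleep();  // give the latched message time to deliver
        return 0;
    }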

  • Source code for the algorithms is included

The SentiBotics kit includes the full source code for the algorithms used in the robot, together with working software samples for autonomous navigation, object recognition and manipulation. SentiBotics algorithms are written in C++ and designed to run on the specified robotic hardware, but they can be ported to other robotic platforms that include the specified or a better computer.

  • Robot hardware is based on the components available on the market

Users may purchase additional components to upgrade the robot, change its functionality or build other robots that will run the SentiBotics software.

  • Robotics simulator

SentiBotics software can be run in the Gazebo robotics simulator for algorithm evaluation and software development without using real robotic hardware. See the robotics simulator section for more information. A 30-day trial version of the SentiBotics development kit is available for download.

Highlights


  • Robot hardware is included, with instructions for assembly and setup.
  • Original proprietary algorithms for autonomous navigation, 3D object recognition and manipulation.
  • Development kit is based on the open and versatile ROS (Robot Operating System) framework.
  • Source code for the robotic algorithms is included.
  • Robot hardware is based on the components available on the market and can be modified by customers.
  • Robotics simulator can be used for algorithm evaluation and software development without real robotic hardware.

SDK Contents


The SentiBotics 2.0 Robotics Software Development Kit is designed to be used with the provided ready-to-use robotic hardware and includes:

  • Details of all algorithms used, including source code in C++ and full documentation.
  • A ROS-based infrastructure that allows users to rapidly integrate third-party robotics algorithms and migrate to other hardware (or modify existing hardware), and that provides a unified framework for robotic algorithm development.
  • Programming samples that can be used for testing or demonstrating the robot’s capabilities, including samples for:
    • Driving the robot platform and operating the robotic arm with a control pad.
    • Building a map of the environment by simply driving the robot around and using this map for autonomous robot navigation.
    • Teaching the robot to recognize objects.
    • Grasping a recognized object.
    • Delivering a previously learned object from a previously visited place.

SentiBotics 2.0 was tested on the specified robot hardware with ROS version Indigo Igloo, deployed on Ubuntu 14.04. SentiBotics algorithms can be ported to other robotic platforms that include the specified or a better computer and run the specified Linux and ROS versions.

Robotic Platform Specifications


SentiBotics 2.0 robotic hardware consists of:

  • Tracked mobile platform (two 17 W motors, 1.72 N·m torque) with accurate motor encoders.
    • The platform includes an inertial measurement unit (IMU) and is capable of executing precise movements. The Teensy 3.1-based platform controller integrates the data from the motor encoders and the IMU, allowing the platform to execute smooth and accurate movements.
    • The platform’s construction allows it to carry up to 10 kg (22 lbs) of payload.
  • Intel NUC i5 on-board computer (RKE53427 board, Intel Core i5-3427U CPU running at 1.80 GHz, 8 GB RAM, 64 GB SSD drive, 802.11n wireless network interface).
  • Battery (20 A⋅h 4-cell LiFePO4) with a 4 A charger.
  • Modular robotic arm with 7 degrees of freedom, equipped with DYNAMIXEL servo motors. Each servo motor provides feedback on position, speed and force, among other parameters. The modular construction allows the kinematic structure of the arm to be changed. The arm can lift up to 0.5 kg (1.1 lbs) of payload in its default configuration.
  • Two 3D sensors:
    • short-range SoftKinetic DS325 for the arm – range 0.15–1 m (0’6″–3’3″);
    • long-range Asus Xtion Pro Live for navigation – range 1–3.5 m (3’3″–11’6″).
  • Control pad for manual control of the robot and the robotic arm.

Note that the robotic platform is designed for indoor use in an office-like or laboratory environment.

SLAM and Autonomous Navigation


SentiBotics navigation is based on a bio-inspired SLAM algorithm. The SentiBotics robot needs to explore the environment and generate its map before navigating. Recommendations for SLAM and autonomous navigation are:

  • A static environment is strongly recommended, although the algorithm is relatively tolerant of moving objects.
  • Obstacles should be far enough from the robot’s trajectory.
  • A visibility zone of 5 m (16’5″) or larger is recommended for efficient robot movement.
  • Glass doors and other transparent objects may be difficult to recognize.
  • Textured static objects and surfaces should be present in the environment.

Note that the robot hardware is designed for indoor use (i.e. an office-like or laboratory environment).

Simultaneous Localization and Mapping (SLAM)

Mapping is performed automatically by driving the robot throughout the environment using the control pad. Visual features are used to model and recognize places, and odometry data is used to track the robot’s local motion.
A graph-like map is continuously updated based on place recognition and odometry data, simultaneously re-localizing the robot within the map. Each node of the map represents a particular waypoint, while edges between certain points represent transitions between the waypoints. Once the map is constructed, it is possible to specify goal vertices in the map and navigate to them autonomously.

Autonomous Navigation

The robot navigates in the environment using the graph-based pseudometric map that was generated during the mapping process:
Each node of the map is associated with a certain waypoint in the environment. The robot searches for the shortest path between the current waypoint and the goal waypoint and follows it, as sketched below.
A task to navigate to a waypoint can be selected by a user or can be a planned event.
Additional ROS mapping packages may be used for navigation.
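
For illustration, the sketch below shows one conventional way such a waypoint graph and shortest-path search could look in C++. It is a generic Dijkstra search over an adjacency list, not the actual SentiBotics map implementation:

    // Waypoint graph sketch: nodes are waypoints, weighted edges are
    // transitions between them. Dijkstra finds the shortest path costs.
    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Edge { int to; double cost; };           // transition between waypoints
    typedef std::vector<std::vector<Edge> > Graph;  // adjacency list; index = waypoint

    // Shortest distance from `start` to every waypoint in the map.
    std::vector<double> shortestPaths(const Graph& map, int start)
    {
        const double inf = std::numeric_limits<double>::infinity();
        std::vector<double> dist(map.size(), inf);
        typedef std::pair<double, int> Item;        // (distance, waypoint)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;
        dist[start] = 0.0;
        pq.push(Item(0.0, start));
        while (!pq.empty()) {
            Item top = pq.top(); pq.pop();
            double d = top.first;
            int u = top.second;
            if (d > dist[u]) continue;              // stale queue entry
            for (size_t i = 0; i < map[u].size(); ++i) {
                const Edge& e = map[u][i];
                if (dist[u] + e.cost < dist[e.to]) {
                    dist[e.to] = dist[u] + e.cost;
                    pq.push(Item(dist[e.to], e.to));
                }
            }
        }
        return dist;
    }

Recording predecessor nodes during the same search would yield the actual waypoint sequence for the robot to follow.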

Object Recognition, Grasping and Delivery


The robot must learn an object’s appearance in advance, before it is sent to locate, grasp and retrieve the object.

Object Learning and Recognition

The SentiBotics object segmentation algorithm tries to locate object candidates with certain properties (i.e. well-separable point clusters that lie on a planar support). Each candidate is compared with the learned object models, and a label is assigned if a match is found, as sketched below.
An object is learned simply by placing it in front of the robot and specifying the object’s name. Only one object at a time should be present in the frame during the learning phase. It is recommended to enroll an object from different positions and distances.
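
The sketch below illustrates this matching step in C++. The descriptor representation, the cosine similarity measure and the acceptance threshold are assumptions made for the example; the actual SentiBotics matcher is proprietary:

    // Compare each segmented candidate with the learned object models
    // and assign the best-matching label above a threshold.
    #include <cmath>
    #include <string>
    #include <vector>

    struct Model     { std::string label; std::vector<float> descriptor; };
    struct Candidate { std::vector<float> descriptor; std::string label; };

    // Assumed similarity measure: cosine similarity of the descriptors.
    float similarity(const std::vector<float>& a, const std::vector<float>& b)
    {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (size_t i = 0; i < a.size() && i < b.size(); ++i) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        if (na == 0.0 || nb == 0.0) return 0.0f;
        return (float)(dot / (std::sqrt(na) * std::sqrt(nb)));
    }

    // Label each candidate with the best model exceeding the threshold.
    void labelCandidates(std::vector<Candidate>& candidates,
                         const std::vector<Model>& models,
                         float threshold)
    {
        for (size_t c = 0; c < candidates.size(); ++c) {
            float best = threshold;
            for (size_t m = 0; m < models.size(); ++m) {
                float s = similarity(candidates[c].descriptor, models[m].descriptor);
                if (s > best) {
                    best = s;
                    candidates[c].label = models[m].label;
                }
            }
            // candidates that match no model keep an empty label
        }
    }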

Object Grasping

The user can order the robot to grasp a particular object. The robot will grasp the object after it is correctly recognized.

The SentiBotics robot can automatically determine an object’s orientation and arrange its manipulator in the way best suited for grasping the object according to its position in the scene. The robot can also automatically reposition itself in order to perform the grasping task. For example, it can drive closer and/or turn to reach the optimal position for picking up the object.

Path planning for the robot’s manipulator is performed automatically to avoid obstacles that might be between the recognized object and the manipulator. Grasping is performed by closing the gripper and measuring the gripper’s position and the force feedback of the finger servo motor. The grasp is considered successful if the gripper is not fully closed and the measured force is large enough.
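
This success test can be expressed as a small function; the feedback scaling and the threshold values below are assumptions for illustration, not the actual SentiBotics parameters:

    // Grasp-success check: gripper not fully closed AND force large enough.
    struct GripperFeedback {
        double position;  // 0.0 = fully open, 1.0 = fully closed (assumed scale)
        double force;     // force reported by the finger servo motor
    };

    bool graspSucceeded(const GripperFeedback& fb)
    {
        const double kFullyClosed = 0.95;  // above this, the gripper closed on nothing
        const double kMinForce    = 0.2;   // minimum holding force (assumed units)
        return fb.position < kFullyClosed && fb.force >= kMinForce;
    }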

Vertical and non-vertical grasping can be performed by the robot. The grasp type is determined from the recognized object’s point cloud.

SentiBotics object grasping and manipulation have some requirements and constraints, which are described below.

Requirements for grasping scene

The grasping scene should satisfy certain conditions:

  • The grasping scene should be static.
  • The planar support for the objects should be made from a non-reflecting and non-transparent material (e.g. glass or marble-like materials are not suitable).

Requirements for the objects on the grasping scene

The objects on the grasping scene should also satisfy certain conditions:

  • The objects should be positioned:
    • on a sufficiently large planar support (e.g. the ground or a table-like platform);
    • in front of the robot, within 1 m (3’3″) of the short-range camera;
    • not too close to each other, so that they can be easily separated as point clusters.
  • The objects’ shape should be approximable by an oriented bounding box.
  • Objects should have an appropriate size to fit in the gripper (a simple feasibility check is sketched after this list).
  • Objects should require no specific grips (e.g. currently it is impossible to grasp a cup by its handle).
  • Objects should be made from non-reflecting and non-transparent materials.
  • Each object can weigh up to 0.5 kg (1.1 lbs). Lighter objects are recommended to prolong the service life of the manipulator’s servo motors.
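
A hypothetical pre-check of these constraints might look as follows; the gripper opening value is an assumption for the example, while the weight limit follows the specification above:

    // Check the listed object constraints: the oriented bounding box must
    // fit the gripper opening and the weight must stay within the limit.
    #include <algorithm>

    struct BoundingBox { double x, y, z; };  // oriented bounding box edges, m

    bool objectIsGraspable(const BoundingBox& box, double weightKg)
    {
        const double kGripperOpening = 0.08;  // assumed maximum opening, m
        const double kMaxWeightKg    = 0.5;   // per the specification above
        double smallest = std::min(box.x, std::min(box.y, box.z));
        return smallest <= kGripperOpening && weightKg <= kMaxWeightKg;
    }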

Object Delivery

SentiBotics autonomous object delivery combines the autonomous navigation, object recognition and object grasping functionalities. The robot performs the following sequence of actions after receiving a delivery command:

  • The robot navigates through its previously mapped locations until it reaches the location where the specified object was recognized.
  • The robot tries to directly recognize the assigned object, repositioning itself until recognition occurs and grasping is possible.
  • The object is grasped using the robotic arm and placed into the attached box.
  • The robot delivers the object to the location where the delivery command was issued.
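
For illustration, this sequence can be viewed as a simple state machine. The sketch below mirrors the described steps only; it is not the actual SentiBotics implementation:

    // Delivery sequence as a state machine: each state corresponds to one
    // of the steps above; a failed step is retried (e.g. reposition again).
    enum class DeliveryState {
        NavigateToObjectLocation,  // drive through mapped waypoints
        SearchAndRecognize,        // reposition until the object is recognized
        Grasp,                     // pick the object, place it into the box
        ReturnToOperator,          // drive back to where the command was issued
        Done
    };

    DeliveryState nextState(DeliveryState s, bool stepSucceeded)
    {
        if (!stepSucceeded) return s;  // retry the current step
        switch (s) {
            case DeliveryState::NavigateToObjectLocation:
                return DeliveryState::SearchAndRecognize;
            case DeliveryState::SearchAndRecognize:
                return DeliveryState::Grasp;
            case DeliveryState::Grasp:
                return DeliveryState::ReturnToOperator;
            case DeliveryState::ReturnToOperator:
            default:
                return DeliveryState::Done;
        }
    }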

Robotics Simulator and Video Tutorials


The Gazebo robotics simulator can be used for developing and testing SentiBotics-based software without real robotic hardware.
A 30-day trial version of the SentiBotics development kit is available for evaluating SentiBotics algorithms in the Gazebo simulator.
Video tutorials demonstrate how to use SentiBotics in the Gazebo simulator.

Licensing Model


Please contact us for information about licensing the SentiBotics robotics development kit.
