Simulating robots with MORSE

It is quite challenging and costly to build up a robot lab, especially if you just want to conduct some experiments with sensors and a moving platform. In today's search for affordable robot platforms, I discovered MORSE, a simulation platform built on the Blender game engine. This article will show how to set it up, select an environment, add sensors and read from them.

It already has the infrastructure, several environments, and pre-built robots, sensors (camera, GPS, laser scanner, IR, etc.) and actuators to play with, and it can be installed directly via apt (Ubuntu and Debian). It took me less than an hour to skim through the tutorials, set up a basic environment, add a laser-range sensor to an existing robot and visualize the results. Pretty amazing! (You can find all of my project files in my GitHub repo.)

To install it, I simply ran:

sudo apt-get install morse-simulator

I first ran

morse create my_first_sim

This command creates the following directory structure:

├── default.py                     <-- creates environment, places robot, etc.
├── scripts
│   └── my_first_sim_client.py     <-- a socket client to the robot
└── src
    └── my_first_sim
        ├── builder
        │   └── __init__.py

Now, according to the tutorial, I run the simulation:

morse run my_first_sim

This opens the simulation window, in which the robot can be steered using the cursor keys.



Pretty awesome already!

Changing the environment

We open default.py, select one of the environments listed in the MORSE documentation, and change the line

env = Environment('sandbox', fastmode = False)

to whatever environment we like. I want to test a laser scanner later on, so I decided on indoors-1/boxes.
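The line above then becomes:

env = Environment('indoors-1/boxes', fastmode = False)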

Adding sensors

A robot is only as good as its sensors, so I added a laser scanner. There are three pre-built ones available (Sick, Hokuyo, SickLDMRS; see the MORSE documentation), which you can easily add to your robot (in default.py; the translation is relative to the robot):

sick = Sick()
sick.translate(0.5, 0.0, 0.5)   # mount 0.5 m in front of and above the robot's origin
sick.rotate(0.0, 0.0, 0.0)
robot.append(sick)              # attach the scanner to the robot

The default robot would tilt forward when I attached the scanner to it (physics ftw!), so I switched to another default platform, the ATRV, by changing the robot definition (in default.py):

robot = ATRV()
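
Putting it all together, my default.py now looks roughly like this. The motion, pose and keyboard components come from the generated template, so treat this as a sketch rather than an exact copy:

from morse.builder import *

# use the ATRV platform instead of the default robot
robot = ATRV()

# actuator: takes linear (v) and angular (w) velocity commands
motion = MotionVW()
robot.append(motion)

# keyboard control for steering in the simulation window
keyboard = Keyboard()
robot.append(keyboard)

# sensor: position + orientation of the robot
pose = Pose()
robot.append(pose)

# sensor: the laser scanner from above
sick = Sick()
sick.translate(0.5, 0.0, 0.5)
sick.rotate(0.0, 0.0, 0.0)
robot.append(sick)

# expose all components through the socket middleware
robot.add_default_interface('socket')

env = Environment('indoors-1/boxes', fastmode = False)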

Easy as that! With the changed environment, robot and laser scanner, the simulation now looks like this:


Reading sensors

Now the client comes into play (scripts/my_first_sim_client.py). The base structure has already been set up; we just need to add some lines (marked with comments at the end of each line I added), namely telling the script to connect to "sick" and then reading some values.

The client is a separate program and needs to be started in addition to the simulation! It communicates with the simulation via sockets only!
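
A typical session therefore uses two terminals (assuming you start from the directory that contains my_first_sim):

morse run my_first_sim                                 # terminal 1: the simulation
python3 my_first_sim/scripts/my_first_sim_client.py    # terminal 2: the client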


#! /usr/bin/env python3
"""
Test client for the my_first_sim simulation environment.

This simple program shows how to control a robot from Python.

For real applications, you may want to rely on a full middleware,
like ROS (www.ros.org).
"""

import sys

try:
    from pymorse import Morse
except ImportError:
    print("you need first to install pymorse, the Python bindings for MORSE!")
    sys.exit(1)

print("Use WASD to control the robot")

with Morse() as simu:

    motion = simu.robot.motion
    sick = simu.robot.sick                        # ADDED
    pose = simu.robot.pose

    v = 0.0  # velocity
    w = 0.2  # angular velocity

    while True:
        key = input("WASD?")
        if key.lower() == "w":
            v += 0.1
        elif key.lower() == "s":
            v -= 0.1
        elif key.lower() == "a":
            w += 0.1
        elif key.lower() == "d":
            w -= 0.1

        # here, we call 'get' on the pose sensor: this is a blocking
        # call. Check pymorse documentation for alternatives, including
        # asynchronous stream subscription.
        print("The robot is currently at: %s" % pose.get())
        sickdata = sick.get_local_data().result()                 # ADDED
        print(sickdata.keys())                                    # ADDED
        print(sickdata["range_list"])                             # ADDED

        motion.publish({"v": v, "w": w})



In my client (see the GitHub repo), I stripped the blocking input and wrote a small class that periodically gets the sensor readings and displays them using matplotlib (usually I would use cv2, but the client needs to run under Python 3, and it's a real pain to build OpenCV for Python 3). Later on, I want to experiment with some localization/navigation/map-building algorithms using this setup.
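
Here is a condensed sketch of that idea. The robot and sensor names match the builder script above; the 180-degree scan window and the polling period are assumptions:

#! /usr/bin/env python3
"""Periodically poll the Sick scanner and plot the scan with matplotlib."""
import math

import matplotlib.pyplot as plt
from pymorse import Morse

class ScanPlotter:
    def __init__(self, simu, period=0.5):
        self.sick = simu.robot.sick
        self.period = period          # seconds between sensor polls
        plt.ion()                     # interactive mode: redraw without blocking
        self.fig, self.ax = plt.subplots()

    def step(self):
        data = self.sick.get_local_data().result()
        ranges = data["range_list"]
        # assuming the default 180 degree window, spread the beams evenly
        angles = [math.radians(-90.0 + 180.0 * i / (len(ranges) - 1))
                  for i in range(len(ranges))]
        # convert the polar readings to x/y points in the scanner frame
        xs = [r * math.cos(a) for r, a in zip(ranges, angles)]
        ys = [r * math.sin(a) for r, a in zip(ranges, angles)]
        self.ax.clear()
        self.ax.plot(xs, ys, '.')
        self.ax.set_aspect('equal')
        plt.pause(self.period)        # draws the figure and waits

with Morse() as simu:
    plotter = ScanPlotter(simu)
    while True:
        plotter.step()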


TobiasWeis | 25. November 2016
