amino  1.0-beta2
Lightweight Robot Utility Library
Installation

This file describes the installation process for Amino.

Setup and Dependencies

The following dependencies are either required or optional. Please see below for the corresponding lists of Debian and Ubuntu packages. Also, please consult the installation errata below, as some package versions have missing features or bugs that require workarounds.

Required Dependencies

These dependencies are required to compile and use Amino:

  • A C compiler and a Fortran compiler (e.g., GCC and gfortran)
  • GNU Autotools (autoconf, automake, libtool, and autoconf-archive), when building from the git repository
  • Maxima
  • BLAS and LAPACK

Optional Dependencies

These dependencies are optional and may be used to enable additional features in Amino:

  • Robot Models (URDF, COLLADA, Wavefront, etc.):
    • SBCL (Steel Bank Common Lisp)
    • Quicklisp (see Quicklisp Setup below)
    • Blender (to convert meshes)
  • GUI / Visualization:
    • OpenGL
    • SDL2
    • plus everything under Robot Models, if you want to visualize meshes, URDF, etc.
  • Raytracing (everything under Robot Models, plus):
    • POV-Ray
    • ffmpeg (to encode rendered animations)
  • Motion Planning:
    • FCL
    • OMPL
    • plus everything under Robot Models, if you want to handle meshes, URDF, etc.
  • Java Bindings:
    • Java SDK
  • Optimization, any or all of the following:
    • CLP (COIN-OR Linear Programming)
    • GLPK (GNU Linear Programming Kit)
    • lp_solve
    • NLopt

Debian and Ubuntu GNU/Linux

Most of the dependencies on Debian or Ubuntu GNU/Linux can be installed via APT. The following command should install all or most of the dependencies. For distribution-specific package lists, please see the files under ./share/docker/, which are used for distribution-specific integration tests.

sudo apt-get install build-essential gfortran \
     autoconf automake libtool autoconf-archive autotools-dev \
     maxima libblas-dev liblapack-dev \
     libglew-dev libsdl2-dev \
     libfcl-dev libompl-dev \
     sbcl \
     default-jdk \
     blender flex povray ffmpeg \
     coinor-libclp-dev libglpk-dev liblpsolve55-dev libnlopt-dev

Now proceed to Quicklisp Setup below.

Installation Errata

Some package versions are missing features or contain minor bugs that impede their use with Amino. The issues encountered so far are listed below:

OMPL Missing Eigen dependency

Version 1.4.2 of OMPL and the corresponding Debian/Ubuntu packages are missing a required dependency on libeigen. To resolve, you may need to do the following:

  1. Manually install libeigen:
     sudo apt-get install libeigen3-dev
    
  2. Manually add the Eigen include directory to the OMPL pkg-config file. The OMPL pkg-config file is typically /usr/lib/x86_64-linux-gnu/pkgconfig/ompl.pc, and the Eigen include path is typically /usr/include/eigen3. Thus, you may change the Cflags entry in that file to:
     Cflags: -I${includedir} -I/usr/include/eigen3
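
After editing the file, you can verify that the change took effect; if pkg-config is installed, the reported compile flags should now include the Eigen include directory:

     pkg-config --cflags ompl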
    

Blender Missing COLLADA Support

Some versions of Blender in Ubuntu and Debian do not support the COLLADA format for 3D meshes, so you may need to install Blender manually in this case (see https://www.blender.org/download/).

SBCL and CFFI incompatibility

The versions of SBCL in some distributions (e.g., SBCL 1.2.4 in Debian Jessie) do not work with new versions of CFFI. In these cases, you will need to install SBCL manually (see http://www.sbcl.org/platform-table.html).
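
To check which SBCL version your distribution provides before deciding whether a manual install is needed, you can run:

sbcl --version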

Mac OS X

Install Dependencies via Homebrew

If you use the Homebrew package manager, you can install the dependencies as follows:

  • Install packages:
      brew tap homebrew/science
      brew install autoconf-archive openblas maxima sdl2 libtool ompl
      brew install https://raw.github.com/dartsim/homebrew-dart/master/fcl.rb
    
  • Set the CPPFLAGS variable so that the cblas.h header can be found:
      export CPPFLAGS=-I/usr/local/opt/openblas/include
    
  • Proceed to Blender Setup below

Install Dependencies via MacPorts

If you use the MacPorts package manager, you can install the dependencies as follows:

  • Install packages:
      sudo port install \
           coreutils wget \
           autoconf-archive maxima f2c flex sbcl  \
           OpenBLAS \
           libsdl2 povray ffmpeg  \
           fcl ompl \
           glpk
    
  • Ensure that LD_LIBRARY_PATH or DYLD_LIBRARY_PATH contains the MacPorts lib directory (typically /opt/local/lib).
      echo $LD_LIBRARY_PATH
      echo $DYLD_LIBRARY_PATH
    
    If /opt/local/lib does not appear in either LD_LIBRARY_PATH or DYLD_LIBRARY_PATH, do the following (and consider adding it to your shell startup script).
      export LD_LIBRARY_PATH="/opt/local/lib:$LD_LIBRARY_PATH"
    
  • Ensure that autoconf can find the MacPorts-installed header and library files by editing the config.site file under your preferred installation prefix (default: /usr/local):
      vi /usr/local/share/config.site
    
    Ensure that the CPPFLAGS and LDFLAGS variables in config.site contain the MacPorts directories.
      CPPFLAGS="-I/opt/local/include"
      LDFLAGS="-L/opt/local/lib"
    
  • Proceed to Blender Setup below

Blender Setup

  • When installing Blender on Mac OS X, you may need to create a wrapper script. If you copy the Blender binaries to /usr/local/blender-2, then run:
      touch /usr/local/bin/blender
      chmod a+x /usr/local/bin/blender
      vi /usr/local/bin/blender
    
    and add the following:
      #!/bin/sh
    
      exec /usr/local/blender-2/blender.app/Contents/MacOS/blender "$@"
    
    Now proceed to Quicklisp Setup below.
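    Before doing so, you can confirm that the wrapper works; the command below should print the Blender version and exit:
      blender --version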

Quicklisp Setup

Finally, install Quicklisp manually if you want ray tracing and robot model compilation:

wget https://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp \
     --eval '(quicklisp-quickstart:install)' \
     --eval '(ql:add-to-init-file)' \
     --eval '(quit)'

Build Overview

  1. If you have obtained amino from the git repository, you need to initialize the git submodules and generate the autotools build scripts.
     git submodule init && git submodule update && autoreconf -i
    
    This step is not necessary when you have obtained a distribution tarball which already contains the submodule source tree and autoconf-generated configure script.
  2. Configure for your system. To see optional features which may be enabled or disabled, run:
     ./configure --help
    
    Then run configure (adding any flags you may need for your system):
     ./configure
    
  3. Build:
     make
    
  4. Install:
     sudo make install
    
    If you need to later uninstall amino, use the conventional make uninstall command.
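
For example, a complete build and install into a user-local prefix might look like this (the prefix shown is only an illustration):

./configure --prefix=$HOME/local
make
make install    # no sudo needed for a prefix you own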

Demo Programs

The Amino distribution includes a number of demo programs. Several of these demos use URDF files which must be obtained separately.

Obtain URDF Files

  • If you already have an existing ROS installation, you can install the baxter_description ROS package, for example on ROS Indigo:
       sudo apt-get install ros-indigo-baxter-description
       export ROS_PACKAGE_PATH=/opt/ros/indigo/share
    
  • An existing ROS installation is not necessary, however, and you can install only the baxter URDF and meshes:
       cd ..
       git clone https://github.com/RethinkRobotics/baxter_common
       export ROS_PACKAGE_PATH=`pwd`/baxter_common
       cd amino
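
In either case, you can check that the Baxter model is visible before building the demos (this assumes ROS_PACKAGE_PATH contains a single directory, as in the examples above):

echo "$ROS_PACKAGE_PATH"
find "$ROS_PACKAGE_PATH" -maxdepth 2 -type d -name baxter_description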
    

Build Demos

  1. (Re)-configure to enable demos:
     ./configure --enable-demos --enable-demo-baxter
    
    Note: On Mac OS X, due to dynamic loading issues, it may be necessary to first build and install amino, and then re-configure and re-build amino with the demos enabled. Under GNU/Linux, it is generally possible to enable the demos during the initial build, without installing Amino beforehand.
  2. Build:
     make
    

Run Demos

Many demo programs may be built. The following command will list the built demos:

find ./demo -type f -executable \
    -not  -name '*.so' \
    -not -path '*.libs*'

The simple-scenefile demo will launch the Viewer GUI with a simple scene compiled from a scene file.

./demo/simple-rx/simple-scenefile

Several demos using the Baxter model show various features. Each can be invoked without arguments.

  • ./demo/urdf/baxter/baxter-view displays the baxter via dynamic loading
  • ./demo/urdf/baxter/baxter-simple displays the baxter via static linking
  • ./demo/urdf/baxter/baxter-wksp moves the baxter arm in workspace
  • ./demo/urdf/baxter/baxter-collision performs collision checking
  • ./demo/urdf/baxter/baxter-ik computes an inverse kinematics solution
  • ./demo/urdf/baxter/baxter-ompl computes a motion plan
  • ./demo/urdf/baxter/baxter-ompl-workspace computes a motion plan to a workspace goal
  • ./demo/urdf/baxter/baxter-ompl-sequence computes a sequence of motion plans

Tests

Basic Tests

  • To run the unit tests:
      make check
    
  • To create and check the distribution tarball:
      make distcheck
    

Docker Tests

Several Dockerfiles are included in ./script/docker, which enable building and testing amino in a container with a clean OS installation. These Dockerfiles are also used in the continuous integration tests.

  • To build a docker image using the script/docker/ubuntu-xenial file:
      ./script/docker-build.sh ubuntu-xenial
    
  • To build amino and run tests using the docker image built from the script/docker/ubuntu-xenial file:
      ./script/docker-check.sh ubuntu-xenial
    

Common Errors

  • ./configure fails when checking for cffi-grovel.
    • Older versions of SBCL (around 1.2.4) have issues with current versions of CFFI. Please try installing a recent SBCL (>1.3.4).
  • I get error messages about missing .obj files or Blender being unable to convert a .dae to Wavefront OBJ.
    • A: We use Blender to convert various mesh formats to Wavefront OBJ, then import the OBJ file. The Blender binaries in the Debian and Ubuntu repositories (as of Jessie and Trusty) are not built with COLLADA (.dae) support. You can download the prebuilt binaries from http://www.blender.org/ which do support COLLADA.
  • When I try to compile a URDF file, I receive the error "aarx.core: not found".
    • A: URDF support in amino is only built if the necessary dependencies are installed. Please ensure that you have SBCL, Quicklisp, and Sycamore installed and rebuild amino if necessary.
  • When building aarx.core, I get an enormous stack trace, starting with:
      Unable to load any of the alternatives:
        ("libamino_planning.so" (:DEFAULT "libamino_planning"))
    
    • This means that SBCL is unable to load the planning library or one of its dependencies, such as OMPL. Typically, this means your linker is not configured properly.

      Sometimes, you just need to run ldconfig or sudo ldconfig to update the linker cache.

      If ldconfig doesn't work, you can set the LD_LIBRARY_PATH variable. First, find the location of libompl.so, e.g., by calling locate libompl.so. Then, add the directory to your LD_LIBRARY_PATH variable. Most commonly, this will mean adding one of the following lines to your shell startup files (e.g., .bashrc):

          export LD_LIBRARY_PATH="/usr/local/lib/:$LD_LIBRARY_PATH"
      
          export LD_LIBRARY_PATH="/usr/local/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH"
      
  • When building aarx.core, I get an enormous stack trace, starting with something like:
      Unable to load foreign library (LIBAMINO-PLANNING).
      Error opening shared object "libamino-planning.so":
      /home/user/workspace/amino/.libs/libamino-planning.so: undefined symbol: _ZNK4ompl4base20RealVectorStateSpace10getMeasureEv.
    
    • This error may occur when you have multiple incompatible versions of OMPL installed and the compiler finds one version while the runtime linker finds a different version. Either modify your header (-I) and linking ($LD_LIBRARY_PATH) paths, or remove the additional versions of OMPL.
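
      For either of the last two errors, it can also help to check which copies of the OMPL library the runtime linker can see; on GNU/Linux, ldconfig -p lists the linker cache:

          ldconfig -p | grep libompl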

Generic Instructions for GNU Autotools

Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc.

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without warranty of any kind.

Basic Installation

Briefly, the shell commands ./configure; make; make install should configure, build, and install this package. The following more-detailed instructions are generic; see the README file for instructions specific to this package. Some packages provide this INSTALL file but do not implement all of the features documented below. The lack of an optional feature in a given package is not necessarily a bug. More recommendations for GNU packages can be found in *note Makefile Conventions: (standards)Makefile Conventions.

The configure shell script attempts to guess correct values for various system-dependent variables used during compilation. It uses those values to create a Makefile in each directory of the package. It may also create one or more .h files containing system-dependent definitions. Finally, it creates a shell script config.status that you can run in the future to recreate the current configuration, and a file config.log containing compiler output (useful mainly for debugging configure).

It can also use an optional file (typically called config.cache and enabled with --cache-file=config.cache or simply -C) that saves the results of its tests to speed up reconfiguring. Caching is disabled by default to prevent problems with accidental use of stale cache files.
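
For example, to enable the cache when re-running configure:

 ./configure -C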

If you need to do unusual things to compile the package, please try to figure out how configure could check whether to do them, and mail diffs or instructions to the address given in the README so they can be considered for the next release. If you are using the cache, and at some point config.cache contains results you don't want to keep, you may remove or edit it.

The file configure.ac (or configure.in) is used to create configure by a program called autoconf. You need configure.ac if you want to change it or regenerate configure using a newer version of autoconf.

The simplest way to compile this package is:

  1. cd to the directory containing the package's source code and type ./configure to configure the package for your system.

    Running configure might take a while. While running, it prints some messages telling which features it is checking for.

  2. Type make to compile the package.
  3. Optionally, type make check to run any self-tests that come with the package, generally using the just-built uninstalled binaries.
  4. Type make install to install the programs and any data files and documentation. When installing into a prefix owned by root, it is recommended that the package be configured and built as a regular user, and only the make install phase executed with root privileges.
  5. Optionally, type make installcheck to repeat any self-tests, but this time using the binaries in their final installed location. This target does not install anything. Running this target as a regular user, particularly if the prior make install required root privileges, verifies that the installation completed correctly.
  6. You can remove the program binaries and object files from the source code directory by typing make clean. To also remove the files that configure created (so you can compile the package for a different kind of computer), type make distclean. There is also a make maintainer-clean target, but that is intended mainly for the package's developers. If you use it, you may have to get all sorts of other programs in order to regenerate files that came with the distribution.
  7. Often, you can also type make uninstall to remove the installed files again. In practice, not all packages have tested that uninstallation works correctly, even though it is required by the GNU Coding Standards.
  8. Some packages, particularly those that use Automake, provide make distcheck, which can be used by developers to test that all other targets like make install and make uninstall work correctly. This target is generally not run by end users.

Compilers and Options

Some systems require unusual options for compilation or linking that the configure script does not know about. Run ./configure --help for details on some of the pertinent environment variables.

You can give configure initial values for configuration parameters by setting variables in the command line or in the environment. Here is an example:

 ./configure CC=c99 CFLAGS=-g LIBS=-lposix

*Note Defining Variables::, for more details.

Compiling For Multiple Architectures

You can compile the package for more than one kind of computer at the same time, by placing the object files for each architecture in their own directory. To do this, you can use GNU make. cd to the directory where you want the object files and executables to go and run the configure script. configure automatically checks for the source code in the directory that configure is in and in .. (the parent directory). This is known as a "VPATH" build.
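
For example, an out-of-source (VPATH) build with GNU make might look like this:

 mkdir build
 cd build
 ../configure
 make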

With a non-GNU make, it is safer to compile the package for one architecture at a time in the source code directory. After you have installed the package for one architecture, use make distclean before reconfiguring for another architecture.

On MacOS X 10.5 and later systems, you can create libraries and executables that work on multiple system types, known as "fat" or "universal" binaries, by specifying multiple -arch options to the compiler but only a single -arch option to the preprocessor. Like this:

 ./configure CC="gcc -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
             CXX="g++ -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
             CPP="gcc -E" CXXCPP="g++ -E"

This is not guaranteed to produce working output in all cases; you may have to build one architecture at a time and combine the results using the lipo tool if you have problems.

Installation Names

By default, make install installs the package's commands under /usr/local/bin, include files under /usr/local/include, etc. You can specify an installation prefix other than /usr/local by giving configure the option --prefix=PREFIX, where PREFIX must be an absolute file name.

You can specify separate installation prefixes for architecture-specific files and architecture-independent files. If you pass the option --exec-prefix=PREFIX to configure, the package uses PREFIX as the prefix for installing programs and libraries. Documentation and other data files still use the regular prefix.

In addition, if you use an unusual directory layout you can give options like --bindir=DIR to specify different values for particular kinds of files. Run configure --help for a list of the directories you can set and what kinds of files go in them. In general, the default for these options is expressed in terms of ${prefix}, so that specifying just --prefix will affect all of the other directory specifications that were not explicitly provided.

The most portable way to affect installation locations is to pass the correct locations to configure; however, many packages provide one or both of the following shortcuts of passing variable assignments to the make install command line to change installation locations without having to reconfigure or recompile.

The first method involves providing an override variable for each affected directory. For example, make install prefix=/alternate/directory will choose an alternate location for all directory configuration variables that were expressed in terms of ${prefix}. Any directories that were specified during configure, but not in terms of ${prefix}, must each be overridden at install time for the entire installation to be relocated. The approach of makefile variable overrides for each directory variable is required by the GNU Coding Standards, and ideally causes no recompilation. However, some platforms have known limitations with the semantics of shared libraries that end up requiring recompilation when using this method, particularly noticeable in packages that use GNU Libtool.

The second method involves providing the DESTDIR variable. For example, make install DESTDIR=/alternate/directory will prepend /alternate/directory before all installation names. The approach of DESTDIR overrides is not required by the GNU Coding Standards, and does not work on platforms that have drive letters. On the other hand, it does better at avoiding recompilation issues, and works well even when some directory options were not specified in terms of ${prefix} at configure time.
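
For example, a staged install into a scratch directory, as is common when building binary packages, might look like this (the directory name is only an illustration):

 make install DESTDIR=/tmp/staging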

Optional Features

If the package supports it, you can cause programs to be installed with an extra prefix or suffix on their names by giving configure the option --program-prefix=PREFIX or --program-suffix=SUFFIX.

Some packages pay attention to --enable-FEATURE options to configure, where FEATURE indicates an optional part of the package. They may also pay attention to --with-PACKAGE options, where PACKAGE is something like gnu-as or x (for the X Window System). The README should mention any --enable- and --with- options that the package recognizes.

For packages that use the X Window System, configure can usually find the X include and library files automatically, but if it doesn't, you can use the configure options --x-includes=DIR and --x-libraries=DIR to specify their locations.

Some packages offer the ability to configure how verbose the execution of make will be. For these packages, running ./configure --enable-silent-rules sets the default to minimal output, which can be overridden with make V=1; while running ./configure --disable-silent-rules sets the default to verbose, which can be overridden with make V=0.
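
For example, assuming the package supports silent rules:

 ./configure --enable-silent-rules
 make        # terse output by default
 make V=1    # verbose output for this run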

Particular systems

On HP-UX, the default C compiler is not ANSI C compatible. If GNU CC is not installed, it is recommended to use the following options in order to use an ANSI C compiler:

 ./configure CC="cc -Ae -D_XOPEN_SOURCE=500"

and if that doesn't work, install pre-built binaries of GCC for HP-UX.

On OSF/1 a.k.a. Tru64, some versions of the default C compiler cannot parse its <wchar.h> header file. The option -nodtk can be used as a workaround. If GNU CC is not installed, it is therefore recommended to try

 ./configure CC="cc"

and if that doesn't work, try

 ./configure CC="cc -nodtk"

On Solaris, don't put /usr/ucb early in your PATH. This directory contains several dysfunctional programs; working variants of these programs are available in /usr/bin. So, if you need /usr/ucb in your PATH, put it after /usr/bin.

On Haiku, software installed for all users goes in /boot/common, not /usr/local. It is recommended to use the following options:

 ./configure --prefix=/boot/common

Specifying the System Type

There may be some features configure cannot figure out automatically, but needs to determine by the type of machine the package will run on. Usually, assuming the package is built to be run on the same architectures, configure can figure that out, but if it prints a message saying it cannot guess the machine type, give it the --build=TYPE option. TYPE can either be a short name for the system type, such as sun4, or a canonical name which has the form:

 CPU-COMPANY-SYSTEM

where SYSTEM can have one of these forms:

 OS
 KERNEL-OS

See the file config.sub for the possible values of each field. If config.sub isn't included in this package, then this package doesn't need to know the machine type.

If you are building compiler tools for cross-compiling, you should use the option --target=TYPE to select the type of system they will produce code for.

If you want to use a cross compiler, that generates code for a platform different from the build platform, you should specify the "host" platform (i.e., that on which the generated programs will eventually be run) with --host=TYPE.
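
For example, to cross-compile for a 32-bit ARM GNU/Linux system (the triplet is only an example; use the one that matches your cross toolchain):

 ./configure --host=arm-linux-gnueabihf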

Sharing Defaults

If you want to set default values for configure scripts to share, you can create a site shell script called config.site that gives default values for variables like CC, cache_file, and prefix. configure looks for PREFIX/share/config.site if it exists, then PREFIX/etc/config.site if it exists. Or, you can set the CONFIG_SITE environment variable to the location of the site script. A warning: not all configure scripts look for a site script.
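
For example, a minimal config.site might look like this (the values are only illustrative; the guards keep settings given on the configure command line from being overridden):

 # Default installation prefix, unless one was given on the command line.
 test "$prefix" = NONE && prefix=/opt/amino
 # Default compiler, only if the user has not already chosen one.
 : "${CC=gcc}"
 # Shared cache file, unless caching was already configured.
 test "$cache_file" = /dev/null && cache_file=/var/tmp/config.cache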

Defining Variables

Variables not defined in a site shell script can be set in the environment passed to configure. However, some packages may run configure again during the build, and the customized values of these variables may be lost. In order to avoid this problem, you should set them in the configure command line, using VAR=value. For example:

 ./configure CC=/usr/local2/bin/gcc

causes the specified gcc to be used as the C compiler (unless it is overridden in the site shell script).

Unfortunately, this technique does not work for CONFIG_SHELL due to an Autoconf bug. Until the bug is fixed you can use this workaround:

 CONFIG_SHELL=/bin/bash /bin/bash ./configure CONFIG_SHELL=/bin/bash

configure Invocation

configure recognizes the following options to control how it operates.

--help
-h
     Print a summary of all of the options to `configure`, and exit.

--help=short
--help=recursive
     Print a summary of the options unique to this package's
     `configure`, and exit.  The `short` variant lists options used
     only in the top level, while the `recursive` variant lists options
     also present in any nested packages.

--version
-V
     Print the version of Autoconf used to generate the `configure`
     script, and exit.

--cache-file=FILE
     Enable the cache: use and save the results of the tests in FILE,
     traditionally `config.cache`.  FILE defaults to `/dev/null` to
     disable caching.

--config-cache
-C
     Alias for `--cache-file=config.cache`.

--quiet
--silent
-q
     Do not print messages saying which checks are being made.  To
     suppress all normal output, redirect it to `/dev/null` (any error
     messages will still be shown).

--srcdir=DIR
     Look for the package's source code in directory DIR.  Usually
     `configure` can determine that directory automatically.

--prefix=DIR
     Use DIR as the installation prefix.  *note Installation Names::
     for more details, including other options available for fine-tuning
     the installation locations.

--no-create
-n
     Run the configure checks, but stop before creating any output
     files.

`configure` also accepts some other, not widely useful, options.  Run
`configure --help` for more details.