- 1. Imager and Lens specifications and use - Default Lens and Wide FOV (Field Of View) Lens?
The M12 lens that comes with the camera is exactly matched to the IR illumination ring that also comes with the camera. If you are going to use passive markers or little reflective balls, then stick with the standard lens. The default lens has an Effective Focal Length of 3.4mm.
If you are planning on using active markers, IR LEDs, then the wider FOV lens could be of use. The FOV is 57 degrees horizontal and the focal length of the wide angle lens is 2.1mm.
The FLEX:C120 imager specifications from the manufacturer are 352 × 288 pixels on a 1/7" sensor (2.2 mm × 1.8 mm) with a square pixel pitch of 6 microns. The functional image sent from the camera (355 × 288) is slightly larger than the area specified in the imager data sheet; the SDK returns the larger image area to reflect the additional available pixels.
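As a sanity check, the quoted FOV figures follow from the sensor width and the lens focal length via the standard pinhole-camera relation. The sketch below is a generic geometric estimate, not an official NaturalPoint calculation:

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Approximate horizontal FOV of an ideal pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# 2.2 mm wide imager with the 3.4 mm default lens -> roughly 36 degrees
print(round(horizontal_fov_deg(2.2, 3.4), 1))
# Same imager with the 2.1 mm wide-angle lens -> close to the stated 57 degrees
print(round(horizontal_fov_deg(2.2, 2.1), 1))
```

The wide-angle result lands near the stated 57-degree horizontal FOV; small differences come from lens distortion and the exact active sensor width.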
- 2. Raw Material - What is this? Is this used to place markers in a scene?
This is a 1-inch-wide, foot-long strip of 3M reflective tape with a sticky fiber backing rather than a cheap sticker backing.
Purchase several feet of raw material and either place it directly as a marker, or wrap it around small balls, typically wood or plastic ones from a craft store, to make round reflective markers.
- 3. SmartNav Hat - Is this just a normal hat?
No. This is a normal baseball cap that has reflective material incorporated into the center of the brim and sewn into the rear adjustment strap. For more information, see the SmartNav accessories page.
- 4. It appears that you only supply smaller tripods. Is this an operational requirement, or can taller tripods be used?
We do have some larger tripods for sale, but I would recommend one of our lighting stands instead: they are much cheaper, extend up to 10 ft, and require only a ball head adapter. Feel free to use your own tripods; almost anything should work. Just make sure they stand on a very stable surface, such as concrete, so that no vibration is transferred from the floor to the camera and causes it to move.
- 5. Do you sell active (LED) wireless markers?
The only markers we currently provide are passive retro-reflective markers which are illuminated by the camera-mounted LEDs. It is also possible to use infrared LEDs powered by a battery (active markers) with the OptiTrack cameras. In our online store, we sell IR LEDs which can be used in the creation of active markers.
- 1. How delicate are the SLIM series cameras? It appears they are attached to a green board without any case?
The FLEX:3 camera board is attached directly to a plastic mounting base that can then be mounted on a tripod. The SLIM:V100 and V120:SLIM camera boards have an optional enclosure that can be purchased. All three cameras are very robust.
- 2. Cabling - Can the USB cabling be used in conjunction with a Ethernet network via an USB Ethernet adapter, or would this degrade the quality of the data?
This could work, but we have not tried it. Typically, a user will group a few cameras together, say up to 4, connect them with a USB hub, and then run the single hub cable to the computer. USB cables are limited to 5 meters, so you will need active (repeater) extension cables if the hub is more than 5 m from the computer; we do not sell these directly.
- 3. Is it possible to maintain the full frame rate when using multiple cameras?
Yes, the cameras maintain the full frame rate with multiple cameras arrayed.
- 4. Is it possible to get grayscale video out of the camera?
Yes, the FLEX:C120, V100, V100:R2 and V120:SLIM cameras have the capability of transmitting the entire greyscale frame to the PC. The Camera SDK provides the ability to select this video mode and access the data.
V100:R2, V120:SLIM, Flex 13 and Ethernet cameras also support compressing the entire grayscale frame with MJPEG inside the camera. This significantly reduces the USB bandwidth required to transmit the image (usually 1/10th of non-compressed).
The FLEX:3 cameras only transmit 1-bit monochrome thresholded video: anything below the threshold is off and anything above it is on. The individual bright objects in the video frame are color-coded based on their rank among the other tracked objects, and the color is superimposed over the video stream; the color information is not part of the actual video stream.
- 5. What frame rates and image sizes work for grayscale video mode?
Using V100, V100:R2, and V120 cameras, uncompressed 640 × 480 grayscale at a 100 FPS exposure rate is only possible with 50% frame decimation, and only under some conditions. To achieve this, it is usually necessary to have only a single camera connected to your system's USB port and no additional USB hubs plugged into the system. The down-sampled modes work under more diverse system configurations: they provide 320 × 240 and 160 × 120 grayscale at 100 FPS exposure with no frame decimation under most conditions.
V100:R2, V120:SLIM and Flex 13 cameras operating in MJPEG grayscale mode can typically transmit at or near their maximum MJPEG frame rate without issue on most systems and configurations.
The S250e MJPEG frame rate is adjustable from 30 FPS to 125 FPS.
FLEX:C120: it is usually possible to get 355 × 288 grayscale at 120 FPS. Under heavier USB load with additional cameras, it may be necessary to set the frame decimation mode to 50% or greater.
Additionally, using the windowing feature to reduce the horizontal size of the image can improve the ability to transfer grayscale video. Some USB chipsets provide greater grayscale throughput than others.
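These limits are easier to reason about as raw bandwidth arithmetic. The back-of-the-envelope sketch below assumes 8-bit pixels and the roughly 10:1 MJPEG ratio mentioned earlier; actual USB throughput varies by chipset:

```python
def grayscale_bandwidth_mb_s(width: int, height: int, fps: int) -> float:
    """Raw 8-bit grayscale video bandwidth in megabytes per second."""
    return width * height * fps / 1e6

raw = grayscale_bandwidth_mb_s(640, 480, 100)
mjpeg = raw / 10  # approximate 10:1 MJPEG compression
print(f"raw: {raw:.1f} MB/s, MJPEG: {mjpeg:.1f} MB/s")  # raw: 30.7 MB/s, MJPEG: 3.1 MB/s
```

A single uncompressed camera already approaches the practical payload of a USB 2.0 bus, which is why decimation, down-sampling, or MJPEG is needed for multi-camera grayscale.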
- 6. At what frame rate can the images be read from the cameras?
The 1 bit thresholded image and/or tracked marker coordinate data can be extracted in real time at the full frame rate from each camera. If you wish to sample at a slower rate, you may discard data.
- 7. Do the OptiTrack cameras work with active (LED) and passive (retroreflective) markers?
The cameras are compatible with both active and passive markers. Active markers work best when using LEDs that produce output in the 850nm wavelength.
- 8. What is the field of view of the cameras?
The field of view for the default M12 lens can be found on the specifications page for each camera. It is possible to use different M12 lenses to change the field of view.
- 9. Is the Camera SDK included with the purchase of the OptiTrack hardware?
The Camera SDK license is included when you purchase the cameras; there is no additional charge. You can also download the SDK from our website to review it before purchasing. The SDK provides an API for interfacing with the OptiTrack cameras and the tracking data which they produce.
Our Tracking Tools software provides advanced 3D and 6DoF tracking capabilities and may be purchased separately.
- 10. What is the pinout for the V120:SLIM and SLIM:V100 I2C, SYNC, LED header?
V120:SLIM (9-pin header) pin-out settings:
For more information, view the V120:SLIM technical drawings, which identify the Micro-T pin locations.
SLIM:V100 (8-pin header) pin-out settings:
- I2C SCL
- I2C SDA
- LED OUT
- SYNC OUT
- SYNC IN
Be careful to draw no more than 50 mA from the 3.3V supply.
- 11. Which camera models can be used together in the same system?
| | V100 | V100:R2 | Flex 13 | S250e | Prime 41 |
|---|---|---|---|---|---|
| V100 | • | • | | | |
| V100:R2 | • | • | | | |
| Flex 13 | | | • | | |
| S250e | | | | • | • |
| Prime 41 | | | | • | • |
V100:R2 and FLEX:V100 cameras can be part of the same sync chain.
- 12. Are there any issues with plugging an OptiTrack camera directly into a PC?
Using a 5m USB cable from an OptiTrack camera may result in a "Device not recognized" or "Unknown device" error on some computer systems due to USB limitations. Using a shorter USB cable will often resolve the issue. When plugging 5m USB cables directly into a computer, avoiding ports on the front of the tower will reduce the likelihood of cable-related errors.
Technical note: USB ports on the back of computers are usually connected directly to the motherboard while ports on the front are often connected by a cable (which may be of low quality). This extra cable length, when combined with a 5m cable, can exceed the maximum USB signaling distance.
- 13. What is the difference between Wired Sync and OptiSync?
Wired sync provides camera-to-camera sync using an extra set of cables in a daisy-chain arrangement in addition to the USB cables. Wired sync is available for FLEX:V100, SLIM:V100, V100:R2 and V120:SLIM cameras. See FAQ entry 2-15 for information about which cameras can be used together in the same sync chain.
OptiSync is NaturalPoint's custom synchronization system, which sends and receives sync signals over the USB cable. No extra sync cable is required. OptiSync is only available when using V100:R2 or Flex 13 cameras connected to OptiHubs.
- 14. Which sync modes and camera combinations are supported by the OptiHub?
| | Wired Sync | OptiSync |
|---|---|---|
| Flex 13 cameras only | | • |
| V120 cameras only | •¹ | |
| V100:R2 cameras only | • | • |
| Mixed V100 / V100:R2 cameras | • | |
| V100 cameras only | • | |
Wired sync with an OptiHub in the chain requires the master OptiHub to be at the start of the sync chain.
¹ For more information about using wired sync between V120 cameras and OptiHubs, click here.
- 15. Which camera types can be mixed together in the same Wired Sync chain?
V100 cameras can be used in the same Wired Sync chain with V100:R2 cameras. All other cameras (C120, V120:SLIM) cannot be mixed with other cameras in the same Wired Sync chain.
Flex 13 cameras only support OptiSync and do not support Wired Sync. V100 and V120:SLIM cameras only support Wired Sync and do not support OptiSync.
- 16. Why aren't the cameras sending frames with the OptiHub in external trigger mode?
When operating in External Sync mode, the OptiHub's blue LED indicates the external trigger status. If the blue LED is not blinking, the OptiHub is not receiving a valid external trigger signal, so the cameras will be stalled. This is the intended behavior, so it is important to make sure your external trigger signal is functioning properly.
If needed, you can investigate further by putting a scope on the Ext Sync-Out jack when the problem occurs. If you are using a video adapter's sync out as your source, you should see a 50% square wave at half the frequency of your video adapter's screen refresh rate.
When not in Shutter Goggles Synchronization mode, the OptiHub free-runs even when the Ext Sync-In signal is missing, which is why the blue LED continues to blink in that case.
- 17. Can multiple camera groups be used in the Tracking Tools with custom synchronization?
No, only one camera group can be defined in Tracking Tools when using custom synchronization.
- 18. Can an OptiHub be used as the master in a V120:SLIM sync chain?
Yes, when using Tracking Tools or the Camera SDK. When using an OptiHub as the master for a V120:SLIM Wired Sync chain with Tracking Tools, it is necessary to set the frame rate to 120 Hz in the Custom Synchronization settings. If this is not done the chain will operate at 100 Hz which will also alter the exposure behavior.
- 19. How many sync cables are required for different camera configurations?
- V100/R2 with Wired Sync: one less than the number of V100/R2 cameras
- V120:SLIM with Wired Sync: one less than the number of V120:SLIM cameras
- OptiHubs + V100:R2 or Flex 13 with OptiSync: one less than the number of OptiHubs
- OptiHubs + V100/R2 with Wired Sync: the same as the number of V100/R2 cameras
- OptiHubs + V120:SLIM with Wired Sync: the same as the number of V120:SLIM cameras
Additional wiring information can also be found in the OptiHub Quick Start Guide.
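The counting rules above can be captured in a small helper. This is an illustrative sketch of the list; the configuration names are informal, not SDK identifiers:

```python
def sync_cables_needed(cameras: int, optihubs: int = 0, sync: str = "wired") -> int:
    """Number of sync cables for a camera group, per the FAQ's counting rules.

    - No OptiHubs (Wired Sync daisy chain): cameras - 1
    - OptiHubs with OptiSync: only the hubs are chained, so optihubs - 1
    - OptiHubs with Wired Sync: one cable per camera
    """
    if optihubs == 0:
        return max(cameras - 1, 0)
    if sync == "optisync":
        return max(optihubs - 1, 0)
    return cameras

print(sync_cables_needed(6))                               # 5
print(sync_cables_needed(6, optihubs=2, sync="optisync"))  # 1
print(sync_cables_needed(6, optihubs=2, sync="wired"))     # 6
```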
- 20. Can an OptiHub be used with the Camera SDK?
The Camera SDK provides complete support for the OptiHub, including features such as OptiSync (sync over USB) and external sync-in.
- 21. What is the Intensity setting and what do its values mean?
The intensity setting controls the IR LEDs on the camera.
The LEDs can be operated in either Continuous or Strobe Mode. Intensity values 0-7 run the LEDs in Continuous Mode: 0 turns the LEDs completely off, 7 turns them fully on, and PWM is applied for values 1-6 (1 = minimum duty cycle, 6 = maximum duty cycle).
Intensity values 8-15 run the LEDs in Strobe Mode, meaning the LEDs are turned on only during the exposure period (i.e., while the shutter is open). PWM may optionally be applied while strobing, with 8 = minimum duty cycle and 15 = maximum duty cycle (fully on while strobing).
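The 0-15 encoding can be summarized as a mode plus a relative duty level. This is a hedged sketch of the mapping described above; the normalized duty values are illustrative, not published duty-cycle figures:

```python
def decode_intensity(value: int):
    """Map an Intensity setting (0-15) to (LED mode, relative duty level 0.0-1.0)."""
    if not 0 <= value <= 15:
        raise ValueError("Intensity must be 0-15")
    if value <= 7:
        # Continuous Mode: 0 = off, 7 = fully on, 1-6 = increasing PWM duty cycle
        return ("continuous", value / 7)
    # Strobe Mode: LEDs on only during exposure; 8 = min duty, 15 = fully on
    return ("strobe", (value - 8) / 7)

print(decode_intensity(0))   # ('continuous', 0.0)
print(decode_intensity(7))   # ('continuous', 1.0)
print(decode_intensity(15))  # ('strobe', 1.0)
```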
- 22. Can Ethernet and USB cameras be mixed in the same system?
No, Ethernet and USB cameras cannot be mixed. Only one type may be used on a system at a time.
- 23. What PoE Ethernet switches are suitable for use with OptiTrack PoE Ethernet cameras?
Ethernet cameras require PoE or PoE+ Gigabit (1000 Mbit/s) Ethernet switches. Standard PoE switches must provide a full 15.4 W to every port simultaneously, and PoE+ switches must provide a full 25 W to every port simultaneously.
Most PoE and PoE+ switches are not suitable because they only provide 15.4 W (PoE) or 25 W (PoE+) to a subset of their ports and downgrade the remaining ports to 8 W or fewer. The Ethernet cameras will not function properly when insufficient power is available to them.
In order to cope with the volume of camera traffic and accommodate future upgrades, the switch must support Gigabit (1000 Mbit/s) on every port. Connecting multiple PoE switches in a star topology to a non-PoE Gigabit uplink switch is recommended for larger camera counts.
PoE MidSpan devices are power injectors, not switches. Ethernet cameras require switches with a full 15.4 W (PoE) or 25 W (PoE+) of power per port.
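A quick way to vet a candidate switch is to compare its total PoE power budget against a full load on every port. A simple check using the per-port figures above (the example wattages are illustrative):

```python
def poe_budget_ok(ports: int, total_budget_watts: float, poe_plus: bool = False) -> bool:
    """True if the switch can supply full PoE power to every port simultaneously."""
    per_port_watts = 25.0 if poe_plus else 15.4
    return total_budget_watts >= ports * per_port_watts

print(poe_budget_ok(24, 195))  # False: 24 ports need 24 x 15.4 = 369.6 W
print(poe_budget_ok(8, 130))   # True: 8 ports need 8 x 15.4 = 123.2 W
```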
- 24. What exactly does the exposure setting mean?
Exposure is the setting that controls the amount of time the shutter is open, exposing the imager to light. In general, the longer the exposure, the more light is let in, and the lighter the picture gets. Exposure is measured differently, depending on the camera. V100 and V120 series cameras measure exposure in individual scanlines, while Ethernet and Flex 13 cameras measure exposure in microseconds. When camera LEDs operate in strobe mode, there is no benefit to using an exposure duration longer than the maximum strobe time per frame. Keeping the exposure shorter will also help to reduce smear for moving markers.
- V100 series: 20.55 µs per scanline, range 1-480. Max strobe time: 100
- V120 series: 17.33 µs per scanline, range 1-480. Max strobe time: 100
- Flex 13 series (120 FPS): 1 µs per exposure
- S250 series (125 FPS): 1 µs per exposure, range 10-7800. Max strobe time: 2693
- S250 series (250 FPS): 1 µs per exposure, range 10-3800. Max strobe time: 1346
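To compare scanline-based settings with the microsecond-based cameras, multiply the setting by the per-scanline time. A small conversion sketch using the figures above:

```python
# Per-scanline exposure time in microseconds, from the list above
SCANLINE_US = {"V100": 20.55, "V120": 17.33}

def exposure_us(camera: str, setting: int) -> float:
    """Convert a scanline-based exposure setting to microseconds."""
    return SCANLINE_US[camera] * setting

# Max strobe time of 100 scanlines on each camera family:
print(f"V100: {exposure_us('V100', 100):.0f} us")  # 2055 us (~2.1 ms)
print(f"V120: {exposure_us('V120', 100):.0f} us")  # 1733 us (~1.7 ms)
```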
- 25. What exactly does the threshold setting mean?
Threshold is the setting that requires a marker to be at a minimum brightness before the camera will consider it a potentially valid marker. Threshold can be set between 1 and 255; a lower threshold results in dimmer objects being checked for validity as markers. Threshold should not be changed from its default setting of 200 unless circumstances warrant it (e.g., excessive false markers, or markers not being detected in a 2D camera view).
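Conceptually, the threshold acts as a per-pixel brightness gate applied before marker detection. The simplified illustration below conveys the idea only; it is not the camera's actual on-board firmware logic:

```python
def threshold_image(pixels, threshold=200):
    """Gate an 8-bit grayscale image into the 1-bit image markers are found in."""
    return [[1 if p >= threshold else 0 for p in row] for row in pixels]

frame = [[12, 240, 199],
         [201, 255, 30]]
print(threshold_image(frame))  # [[0, 1, 0], [1, 1, 0]]
```

Lowering the threshold lets dimmer pixels through, which can reveal faint markers but also admits more false objects.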
- 26. What are the available IR LED modes for FLEX:V100R2 and Flex 13 cameras based on the OptiHub they are connected to?
- Flex 13
- OptiHub 1 : no IR LEDs enabled
- OptiHub 2 : high-power IR LED mode
- V100:R2
- Generic USB Hub : standard IR LED mode
- OptiHub 1 : standard & high-power IR LED mode
- OptiHub 2 : standard & high-power IR LED mode
3. Compatibility and Data Formats
- 1. Are Linux drivers available?
We do not have any new information about the availability of Linux support at the moment, though we will share that information with our users if any becomes available.
- 2. What marker data formats are available?
The Camera SDK provides detected object data as a real-time stream of 2D imager coordinates. It provides a basic framework for recording this information and playing it back at a later time.
Tracking Tools provides a real-time stream of 3D marker and 6DoF rigid body coordinates, and the option to export recorded data to CSV files.
ARENA provides real-time 3D marker and skeleton data, along with BVH file output.
- 3. What coordinate formats are available for the marker position data?
The information available for markers in the Camera SDK is the sub-pixel weighted center X, Y position and area of the marker on the imager in pixels.
Marker data in 3D physical coordinates is available when using Tracking Tools.
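The sub-pixel weighted center reported by the SDK can be illustrated with an intensity-weighted centroid over a marker's pixels. This sketch is a generic centroid computation, not NaturalPoint's actual algorithm:

```python
def weighted_centroid(pixels):
    """Intensity-weighted centroid of (x, y, intensity) pixel samples.

    Returns (cx, cy, area): sub-pixel center and area in pixels.
    """
    total = sum(i for _, _, i in pixels)
    cx = sum(x * i for x, _, i in pixels) / total
    cy = sum(y * i for _, y, i in pixels) / total
    return cx, cy, len(pixels)

# A small bright blob, brighter on its right side, so the center shifts right:
blob = [(10, 5, 100), (11, 5, 200), (10, 6, 100), (11, 6, 200)]
cx, cy, area = weighted_centroid(blob)
print(round(cx, 3), cy, area)  # 10.667 5.5 4
```

Weighting by intensity is what allows the reported center to fall between pixel boundaries, giving sub-pixel precision.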
- 4. What other FEM/CAD and graphics applications (e.g. Matlab, Amira, 3D Studio Max, I-DEAS, Abaqus, etc.) are compatible with the OptiTrack?
ARENA and Expression provide BVH/C3D file export, which is compatible with many common 3D rendering and animation packages (Poser, 3D Studio Max, MotionBuilder, etc). ARENA and Expression also provide real-time streaming using our NatNet transport. Streaming plugins are available for MotionBuilder, 3D Studio Max, and DAZ 3D. Our free NatNet SDK can be used by customers who wish to write their own plugins or applications to utilize the real-time streamed motion capture data.
Tracking Tools supports real-time streaming of 3D point cloud and rigid body tracking data over several industry-standard streaming transports. Our custom NatNet (with free SDK) is available, along with VRPN and Trackd.
- 5. Which Shutter Glasses/Stereo-Vision Systems are compatible with OptiTrack systems?
When used with an OptiHub, the following systems are compatible:
- Stereographics CrystalEyes
- NuVision 60GX
- NuVision APG6000
- 6. Is the Flex 13 camera compatible with an OptiHub v1?
When connected to an OptiHub v1, a Flex 13 camera will transmit video data; however, the IR LEDs will be disabled due to insufficient available power. In order to use the on-board IR illumination, a Flex 13 camera must be connected to an OptiHub 2.
- 7. Is the OptiHub v1 compatible with the OptiHub 2?
Yes, OptiHub v1 devices can be used in the same sync chain as OptiHub 2 devices. The only difference between the two OptiHub versions is the maximum amount of power they can provide for cameras connected to them.
4. Tracking Capability
- 1. What is the maximum distance that markers can be tracked by OptiTrack cameras?
The range depends upon the size of the marker, camera model, and lens used. Typically, larger markers and smaller FOV lenses allow for greater tracking range. See the camera comparison table for more details.
- 2. How does the software cope with occlusion?
The Camera SDK does not provide any special handling of occlusion. It is a development platform on which customers can build their own applications that implement multi-camera tracking and handle occlusion.
For customers who wish to purchase this capability, Tracking Tools provides a ready-to-use tracking solution that utilizes multiple cameras to handle occlusion. As long as a marker is visible to at least two cameras, the software will attempt to track it. More cameras can be added to provide better coverage and reduce occlusion.
- 3. What is the recommended operating environment for using OptiTrack cameras?
The ideal environment has no external sunlight and uses only fluorescent lighting. Sunlight and incandescent or halogen lights emit infrared that the cameras can pick up, so excluding them should prevent false objects.
- 4. How many 'objects' (i.e. markers) is the OptiTrack system capable of detecting?
The number of markers that the OptiTrack is capable of tracking depends on the size of the markers and the distance they are from the camera. At a distance of four feet, the FLEX:C120 can track at least 40 half-inch markers and the V100:R2 and V100 cameras can track at least 80 half-inch markers.
- 5. Is there any extra processing that occurs when using multiple cameras?
If the 2D marker data from multiple cameras does not need to be combined, then the Camera SDK does not require any additional processing. If multi-camera 3D tracking is required, then Tracking Tools should be used. This software handles all additional processing required to track and combine the marker data.
- 6. When a new frame is acquired on a camera, what information is sent over the bus (e.g. just a notification, the entire image, just dot positions, etc.)?
In Grayscale Mode, all of the pixel data including intensity information is sent to the PC over the USB bus.
In MJPEG Grayscale Mode, a MJPEG compressed version of the pixel data is sent to the PC over the USB bus; it is then decompressed by the software back to a full grayscale image.
In Preprocessed Mode, a 1 bit thresholded image is transferred to the PC where the final marker positions are calculated.
In Preprocessed Object Mode, all of the calculations are done in the camera and only the final marker positions are sent to the PC.
- 7. How is the synchronization of multiple cameras handled?
The V100:R2, V120:SLIM, FLEX:V100 and FLEX:C120 cameras provide hardware-based synchronization; this allows them to expose frames at the exact same time. In order to take advantage of synchronization, the cameras must be connected to each other using Sync Cables (sold separately from cameras).
When using V100:R2 or Flex 13 cameras with OptiHubs, OptiSync is available. This provides sync-over-USB from the OptiHubs to each camera without the need for sync cables to connect the cameras.
- 8. How does the Vector feature select the objects to use for its calculation?
The Vector feature uses its own tracking algorithm to identify which objects should be used for the calculation, and does not necessarily use the top three ranked objects from the general tracking results. The vector calculations assume that you are tracking only a vector clip with minimal noise in the background. The feature uses the size and positions of the 3 largest markers to find the points.
- This feature is no longer supported.
- 9. Does the Camera SDK track objects and extract coordinate information from multiple cameras?
The Camera SDK addresses multiple cameras individually but does not correlate the tracking information between them. It does not extract the common 3DOF coordinate position for objects which are visible to multiple cameras at the same time, it only provides imager-pixel coordinates for each object per-camera.
If multi-camera 3D tracking is required, then Tracking Tools should be used. This software handles all additional processing required to track and combine the marker data.
- 10. Is it possible to control the camera illumination?
Cameras with built-in IR LED rings have the ability to adjust the brightness of their illumination output.
Cameras with strobed illumination mode can provide a short burst of illumination at the start of the frame. Currently only the V120:SLIM camera provides support for synchronizing external illumination sources with the frame exposure.
- 11. Is coordinate acquisition performed in real time, or is processing done after the movement is captured?
The acquisition and processing all occurs in real time. It is possible to use our SDK to access the data if you wish to write an application to record it for post-processing.
- 12. What angles and distances are in the vector calculation and tracking?
The X, Y, and Z values are distances in mm from the camera. The point used to calculate this position is a rough estimate of the point of rotation of a person's head. The position is calculated assuming the vector clip is placed on the brim of a baseball cap on the user's head, and is computed by taking a normal to the plane formed by the vector clip. Human head rotation is a complex movement, so this point of rotation is a simplification of the problem.
The angles returned by the API are simply the angles formed by the vector plane. Typically they are used as relative measures where the users "center" themselves to the camera.
The algorithm for determining the position of the vector clip is proprietary.
- This feature is no longer supported.
- 13. Is OptiTrack good for motion capture like character animation, 3D modeling, dance, animated movies or games?
ARENA and Expression provide full body and face capture solutions with BVH/C3D file export and real-time streaming of 3D marker and skeleton data. Tracking Tools provides real-time 3D point cloud and rigid body tracking with a powerful software API.
The Camera SDK only provides basic 2D tracking; it does not include support for 3D motion capture or tracking.
- 14. How do USB and Ethernet based systems compare?
| | USB | Ethernet |
|---|---|---|
| Ideal for: Face Mocap | • | |
| Ideal for: Body Mocap | • | • |
| Ideal for: Rigid Body Tracking | • | • |
| Ideal for: High-speed motions | | • |
| Ideal for: Complex motions | | • |
| Volume Sizes | Small - Medium | Small - Large |
| Max Capture Volume | 18' × 18' (5.5m × 5.5m) | 75' × 150' (22m × 45m) |
| Tracking Range | Short - Medium | Medium - Long |
| Max Camera Count | 24 | 96 |
| Cabling Range: camera to hub/switch | 16' / 5m | 328' / 100m |
| Cabling Range: hub/switch to PC | 49' / 15m (with max of 2 extensions) | 328' / 100m |
Note: Cat6 Ethernet cabling must be used (Cat5/Cat5e/etc not supported).
5. SDK Installer Usage and Distribution
- 1. Can OptiTrack runtime/components be repackaged into installers and distributed with applications based on the Camera SDK?
We request that OptiTrack runtime/components be delivered using the installers we provide; they should not be repackaged. This helps ensure that everything is installed properly and also gives the user a chance to view the license which accompanies them. Most installer engines have the ability to invoke another installer (such as the Camera SDK one) during the installation process.
If you have a special distribution need, please contact us to discuss it in detail.
- 2. Can I redistribute the Camera SDK installer with my application that uses the SDK?
The preferred approach is to link to the Camera SDK installer hosted on the NaturalPoint web servers. However, if you would like to bundle our installer with your application then we can accommodate that. In order to get a formal endorsement for doing so, please contact firstname.lastname@example.org and provide contact information for yourself in addition to more information about your project.
MSI based installers for the SDK are not available at this time.
6. Tracking Tools (Point Cloud and Rigid Body)
- 1. Is there sorting and tracking of 3D points?
For individual 3D points, Tracking Tools does not track points between consecutive frames; it only returns information about the markers found in the current frame. There is also no guarantee that markers will be delivered in the same order from one frame to the next (no sorting).
For 3D markers clustered into rigid bodies, Tracking Tools will identify and track the locations of the rigid body and its markers between frames.
- 2. Can the baseline OptiTrack API be used in the same application as the Point Cloud API?
It is not possible to use the OptiTrack API in the same application as the Tracking Tools API.
- 3. What output formats are available for the Rigid Body toolkit?
Tracking Tools includes an API which applications can use to capture and stream real-time data. The Rigid Body GUI tool can be used to capture and stream real-time data, or to record and save data captures to disk.
There are also three real-time network streaming formats available:
- TrackD support (Mechdyne's proprietary 6DoF data protocol)
- VRPN support (open source 6DoF data protocol)
- NatNet format (open source generic 6DoF data protocol from NaturalPoint)
The following notes describe the pins of the V120:SLIM 9-pin header referenced in FAQ entry 2-10:
- Marked on the Garry 11-0021-50-09L connector
- LVTTL-level control signal that asserts high when IR LEDs are turned on, coincident with camera exposure period
- LVTTL-level PC software-controlled auxiliary output control signal
- For NaturalPoint internal use in a future V120:SLIM IR LED Ring Board
- Up to 100 mA available for use by user
- WiredSync External-Sync Input for camera group synchronization
- WiredSync External-Sync Output for camera group synchronization
- The square one at the bottom of the board