XR-Robotics with Pico 4 Ultra: VR Teleoperation Setup from Headset to Robot Simulation
4th March 2026
I’ve been setting up XR-Robotics with a Pico 4 Ultra headset to teleoperate robot arms in simulation — and eventually collect demonstration data for imitation learning. The setup spans a PC running Ubuntu, a Python teleoperation stack, and a VR headset acting as the human interface. Here’s the complete step-by-step guide.
What You’re Building
The system has three parts: the headset streams over WiFi to a bridge service on the PC, which relays pose data locally to the Python teleoperation stack:
┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐
│ Pico 4 Ultra │ WiFi │ PC Service │ local │ Python Teleop │
│ (VR headset) │ ──────▶ │ (bridge app) │ ──────▶ │ (MuJoCo sim) │
│ │ │ │ │ │
│ Tracks hands │ │ Relays hand │ │ Moves simulated │
│ + controllers │ │ pose data │ │ robot arms │
└──────────────────┘ └──────────────────┘ └──────────────────┘
Your hand movements in VR get streamed to the PC, which drives a physics simulation of robot arms. You can also record episodes for training VLA models.
Phase 1: Prepare Your PC (Ubuntu 22.04)
You need a dedicated PC running Ubuntu 22.04. This is the “robot brain” that the headset talks to.
Step 1.1 — Install System Basics
sudo apt update && sudo apt upgrade -y
sudo apt install -y build-essential cmake git curl wget python3-pip
Step 1.2 — Install Conda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
# Restart your terminal after installation
Step 1.3 — Install Qt 6.6.3
The PC Service requires Qt 6.6.3. Download the Qt Online Installer, run it, and choose Custom Installation. Select Qt 6.6.3 with these components:
- gcc_64 compiler
- Qt Quick 3D
- Qt 5 Compatibility Module
- Qt Multimedia
- CMake + Ninja (under build tools)
Phase 2: Install XRoboToolkit PC Service
This is the bridge between your Pico headset and the robot simulation. It receives hand tracking data from the headset and forwards it to your Python scripts.
Option A — Install from Pre-built Release (Easier)
# Check the releases page for a .deb package:
# https://github.com/XR-Robotics/XRoboToolkit-PC-Service/releases
sudo dpkg -i xrobotoolkit-pc-service_*.deb
Option B — Build from Source
git clone https://github.com/XR-Robotics/XRoboToolkit-PC-Service.git
cd XRoboToolkit-PC-Service
# Edit RoboticsService/qt-gcc.sh — update the Qt path
# to match your installation (e.g., ~/Qt/6.6.3/gcc_64)
bash RoboticsService/qt-gcc.sh
Run the PC Service
# If installed via .deb:
/opt/apps/roboticsservice/runService.sh
# If built from source, check the build output directory for the executable
Leave this running — it must be active whenever you use the headset.
Phase 3: Set Up the Teleoperation Python Environment
Step 3.1 — Clone the Repo
git clone https://github.com/XR-Robotics/XRoboToolkit-Teleop-Sample-Python.git
cd XRoboToolkit-Teleop-Sample-Python
Step 3.2 — Create Conda Environment
bash setup_conda.sh --conda xrobo
conda activate xrobo
bash setup_conda.sh --install
This installs all dependencies: MuJoCo, Placo, xrobotoolkit_sdk, and everything else the teleoperation scripts need.
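Before moving on, it's worth confirming the environment actually resolves those packages. A stdlib-only sanity check (the module names are taken from the dependency list above and are assumptions; adjust them if the repo's requirements differ):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Module names assumed from the dependency list above; adjust as needed
missing = missing_modules(["mujoco", "placo", "xrobotoolkit_sdk"])
if missing:
    print(f"Missing: {missing}; re-run `bash setup_conda.sh --install`")
else:
    print("All teleoperation dependencies found")
```

Run it inside `conda activate xrobo`; anything it reports missing means the install step did not complete.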
Phase 4: Set Up Pico 4 Ultra
Step 4.1 — Enable Developer Mode
- On the Pico 4 Ultra, go to Settings > General > Developer
- Enable Developer Mode and USB Debugging
Step 4.2 — Install ADB on Your PC
sudo apt install -y android-tools-adb
Step 4.3 — Install the XRoboToolkit App
Option A — Pre-built APK (if available):
# Check XRoboToolkit-Unity-Client releases for a pre-built APK
adb install XRoboToolkit.apk
Option B — Build from Unity:
- Install Unity (check the XRoboToolkit-Unity-Client repo for the required version)
- Clone the repository
- Open the project in Unity
- Build for Android (Pico platform)
- Install the resulting APK via ADB
Phase 5: Connect & Run Your First Simulation
Step 5.1 — Network Setup
Connect your PC and Pico 4 Ultra to the same WiFi network. Note your PC’s IP:
hostname -I
Step 5.2 — Start the PC Service
/opt/apps/roboticsservice/runService.sh
Step 5.3 — Launch a Simulation
In a new terminal:
cd XRoboToolkit-Teleop-Sample-Python
conda activate xrobo
python scripts/simulation/teleop_dual_ur5e_mujoco.py
Step 5.4 — Connect the Headset
- Put on the Pico 4 Ultra
- Launch the XRoboToolkit app
- A connection prompt appears — select your PC’s IP address
- Status should show “WORKING”
Step 5.5 — Start Teleoperating
| Control | Action |
|---|---|
| Hold grip buttons (L/R) | Activate arm control |
| Squeeze triggers | Control grippers (open/close) |
| Move your hands | Robot arms follow your movements |
Move your hands — the simulated robot follows in real time. The MuJoCo physics simulation handles collisions, gravity, and contact forces.
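The grip buttons act as a clutch: control engages only while a grip is held, and the controller pose at the moment of engagement becomes the origin for relative motion, so the arm never jumps to your absolute hand position. A minimal sketch of that logic (the class name and threshold are illustrative, not the repo's actual implementation):

```python
class Clutch:
    """Engage relative control while the grip is held; re-anchor on each press."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold  # grip value (0.0-1.0) that counts as held
        self.origin = None          # controller position captured at engagement

    def update(self, grip_value, controller_pos):
        """Return the position delta to apply, or None while disengaged."""
        if grip_value < self.threshold:
            self.origin = None  # released: forget the anchor
            return None
        if self.origin is None:
            self.origin = list(controller_pos)  # just pressed: anchor here
        return [c - o for c, o in zip(controller_pos, self.origin)]


clutch = Clutch()
print(clutch.update(0.0, [0, 0, 0]))  # None: grip released, no control
print(clutch.update(1.0, [1, 2, 3]))  # [0, 0, 0]: just engaged, zero delta
print(clutch.update(1.0, [2, 2, 3]))  # [1, 0, 0]: moved while held
```

Re-anchoring on every press is what lets you "ratchet" large robot motions out of small comfortable hand motions.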
Phase 6: Data Collection (for AI Training)
Once you’re comfortable teleoperating, you can record demonstration episodes for imitation learning:
| Action | Control |
|---|---|
| Start recording | Press B button |
| Stop recording | Press B button again |
| Discard episode | Press A button |
Recorded episodes are saved as .pkl files that can be converted to LeRobot format for VLA training.
Teleoperation Session
├── episode_001.pkl (pick up cube — success)
├── episode_002.pkl (pick up cube — success)
├── episode_003.pkl (pick up cube — dropped, discard with A)
└── episode_004.pkl (place cube in bin — success)
↓
Convert to LeRobot format
↓
Fine-tune VLA model (GR00T, ACT, Diffusion Policy, etc.)
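The exact episode schema depends on the recording script, but the files are plain pickles, so inspecting one is easy. A sketch using a made-up schema (the real keys will differ; check the recorder's source):

```python
import os
import pickle
import tempfile

# Hypothetical episode structure; the recorder's actual keys may differ
episode = {
    "observations": [[0.1, 0.2], [0.15, 0.25]],  # e.g. joint positions per step
    "actions": [[0.0, 0.1], [0.05, 0.1]],        # commanded targets per step
    "success": True,
}

path = os.path.join(tempfile.mkdtemp(), "episode_001.pkl")
with open(path, "wb") as f:
    pickle.dump(episode, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)

print(f"{len(loaded['observations'])} steps, success={loaded['success']}")
```

Loading a real episode the same way is a quick check that your recordings contain what you expect before converting a whole session to LeRobot format.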
Available Simulations
These scripts run entirely in simulation — no physical robot needed:
| Script | What It Does |
|---|---|
| teleop_dual_ur5e_mujoco.py | Two UR5e arms in MuJoCo physics sim |
| teleop_shadow_hand_mujoco.py | Dexterous Shadow Hand in MuJoCo |
Start with the dual UR5e script — it’s the most straightforward way to verify your setup is working end-to-end.
Phase 7: Adding a ROS 2 Layer
The basic setup uses a direct Python SDK connection. But if you want to integrate with the broader robotics ecosystem — RViz visualization, MoveIt motion planning, rosbag recording, or multi-machine setups — you need ROS 2 in the loop.
Here’s the architecture with ROS 2 added:
┌─────────────────┐     WiFi/UDP      ┌──────────────────────────────┐
│  Pico 4 Ultra   │◄─────────────────►│ XRoboToolkit PC Service      │
│ (Unity Client)  │   pose + video    │ (runs on Ubuntu 22.04 PC)    │
└─────────────────┘                   └──────────────┬───────────────┘
                                                     │ PXREARobotSDK (C++ lib)
                                      ┌──────────────▼───────────────┐
                                      │ picoxr ROS2 Node (publisher) │
                                      │ Publishes: /xr_pose          │
                                      └──────────────┬───────────────┘
                                                     │ xr_msgs/Custom
                                       ┌─────────────┼─────────────┐
                                       │             │             │
                                 ┌─────▼────┐  ┌─────▼────┐ ┌──────▼──────┐
                                 │  MuJoCo  │  │  Robot   │ │ Your Custom │
                                 │ Sim Node │  │ HW Node  │ │    Nodes    │
                                 └──────────┘  └──────────┘ └─────────────┘
The Pico headset data flows through the PC Service, gets published as a ROS 2 topic, and any number of subscriber nodes can consume it — simulation, hardware drivers, logging, visualization, all at once.
Step 7.1 — Install ROS 2 Humble
If you followed the earlier ROS 2 installation guide, you already have this. Otherwise:
# Add ROS2 apt repository
sudo apt install software-properties-common
sudo add-apt-repository universe
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
-o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] \
http://packages.ros.org/ros2/ubuntu \
$(. /etc/os-release && echo $UBUNTU_CODENAME) main" | \
sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
# Install ROS2 Humble Desktop + dev tools
sudo apt update
sudo apt install -y ros-humble-desktop \
python3-colcon-common-extensions python3-rosdep ros-humble-tf2
# Source ROS2
echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
source ~/.bashrc
# Initialize rosdep
sudo rosdep init
rosdep update
Step 7.2 — Create the ROS 2 Workspace
mkdir -p ~/xrobo_ros_ws/src
cd ~/xrobo_ros_ws/src
# Clone the official XRoboToolkit ROS repo
git clone https://github.com/XR-Robotics/XRoboToolkit-Teleop-ROS.git
Step 7.3 — Build the XRoboToolkit ROS 2 Packages
The repo provides two core packages:
Package 1: xr_msgs — Custom message definitions that carry headset data through ROS 2:
# Custom.msg — the main message on /xr_pose topic
int64 timestamp_ns
int32 input
Head head
Controller left_controller
Controller right_controller
# Head.msg
float32[7] pose # x, y, z, rx, ry, rz, rw (position + quaternion)
int32 status # 3 = active, -1 = inactive
# Controller.msg
float32 axis_x # joystick X
float32 axis_y # joystick Y
bool axis_click # joystick pressed
float32 gripper # grip button (0.0 - 1.0)
float32 trigger # trigger button (0.0 - 1.0)
bool primary_button # A button
bool secondary_button # B button
bool menu_button
float32[7] pose # x, y, z, rx, ry, rz, rw
int32 status # 3 = active
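Both Head and Controller carry pose as a flat float32[7] in position-then-quaternion order, so subscribers need to split it. A small helper, assuming the ordering in the message comments (x, y, z, then a quaternion with w last):

```python
def split_pose(pose):
    """Split a 7-float XR pose into (position, quaternion).

    Layout per the message comments: x, y, z, rx, ry, rz, rw,
    i.e. the quaternion is in (x, y, z, w) order with w last.
    """
    if len(pose) != 7:
        raise ValueError(f"expected 7 floats, got {len(pose)}")
    return list(pose[:3]), list(pose[3:])


pos, quat = split_pose([0.1, 0.2, 0.3, 0.0, 0.0, 0.0, 1.0])
print(pos)   # [0.1, 0.2, 0.3]
print(quat)  # [0.0, 0.0, 0.0, 1.0] (identity rotation)
```

Note the w-last convention matters when handing the quaternion to other libraries: ROS geometry_msgs also uses (x, y, z, w), but MuJoCo expects w first.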
Package 2: picoxr — The XR publisher node. Connects to XRoboToolkit PC Service via PXREARobotSDK and publishes Pico headset data to ROS 2.
Build both:
cd ~/xrobo_ros_ws
# Make sure the PC Service SDK is installed at /opt/apps/roboticsservice/SDK
# (installed when you set up XRoboToolkit PC Service)
colcon build --packages-select xr_msgs picoxr
source install/setup.bash
Step 7.4 — Create a MuJoCo Simulation Package
cd ~/xrobo_ros_ws/src
ros2 pkg create --build-type ament_python xrobo_mujoco \
--dependencies rclpy xr_msgs std_msgs sensor_msgs geometry_msgs
Create the MuJoCo subscriber node at xrobo_mujoco/xrobo_mujoco/mujoco_sim_node.py (the inner Python package directory), and register it as a console_scripts entry point in setup.py so launch files can resolve the mujoco_sim_node executable:
#!/usr/bin/env python3
"""
ROS2 Node: MuJoCo Simulation

Subscribes to /xr_pose from Pico 4 Ultra and drives a simulated UR5e.
Publishes joint states and end-effector pose.
"""
import rclpy
from rclpy.node import Node
from xr_msgs.msg import Custom
from sensor_msgs.msg import JointState
from geometry_msgs.msg import PoseStamped
import mujoco
import numpy as np
import os


class MujocoSimNode(Node):
    def __init__(self):
        super().__init__('mujoco_sim_node')

        # --- Parameters (configurable at launch) ---
        self.declare_parameter('model_xml', 'scene_double.xml')
        self.declare_parameter('step_size', 0.5)
        self.declare_parameter('damping', 0.15)
        self.declare_parameter('publish_rate', 100.0)  # Hz

        model_xml = self.get_parameter('model_xml').value
        self.step_size = self.get_parameter('step_size').value
        self.damping = self.get_parameter('damping').value
        rate = self.get_parameter('publish_rate').value

        # --- MuJoCo Setup ---
        assets_path = os.path.join(
            os.path.dirname(__file__), '..', 'assets', 'universal_robots_ur5e'
        )
        self.model = mujoco.MjModel.from_xml_path(
            os.path.join(assets_path, model_xml)
        )
        self.data = mujoco.MjData(self.model)
        mujoco.mj_resetData(self.model, self.data)
        mujoco.mj_resetDataKeyframe(
            self.model, self.data, self.model.key("home").id
        )

        # End effector IDs
        self.left_ee_id = self.model.body('wrist_3_link').id
        self.right_ee_id = self.model.body('wrist_3_link__2').id
        self.mocap_left = self.model.body("target").mocapid[0]
        self.mocap_right = self.model.body("target__2").mocapid[0]

        # Goal positions
        self.goal_left = np.array([0.49, 0.63, 0.59])
        self.goal_right = np.array([0.49, -0.43, 0.59])
        self.rest_left = np.array([0.5, 0.5, 0.5])
        self.rest_right = np.array([0.5, -0.5, 0.5])
        self.origin_left = None
        self.origin_right = None

        # --- ROS2 Subscribers ---
        self.xr_sub = self.create_subscription(
            Custom, '/xr_pose', self.xr_callback, 10
        )

        # --- ROS2 Publishers ---
        self.joint_pub = self.create_publisher(JointState, '/joint_states', 10)
        self.left_ee_pub = self.create_publisher(
            PoseStamped, '/left_ee_pose', 10
        )
        self.right_ee_pub = self.create_publisher(
            PoseStamped, '/right_ee_pose', 10
        )

        # --- Publish timer ---
        self.create_timer(1.0 / rate, self.publish_state)
        self.get_logger().info('MuJoCo sim node started')

    def xr_callback(self, msg: Custom):
        """Receive XR controller poses and update simulation goals."""
        if msg.left_controller.status == 3:
            pose = np.array(msg.left_controller.pose)
            if self.origin_left is None:
                self.origin_left = pose.copy()
            delta = pose[:3] - self.origin_left[:3]
            self.goal_left = self.rest_left + np.array(
                [-delta[2], -delta[0], delta[1]]
            )
        if msg.right_controller.status == 3:
            pose = np.array(msg.right_controller.pose)
            if self.origin_right is None:
                self.origin_right = pose.copy()
            delta = pose[:3] - self.origin_right[:3]
            self.goal_right = self.rest_right + np.array(
                [-delta[2], -delta[0], delta[1]]
            )

    def publish_state(self):
        """Run physics step and publish joint states + EE poses."""
        self.data.mocap_pos[self.mocap_left] = self.goal_left
        self.data.mocap_pos[self.mocap_right] = self.goal_right
        mujoco.mj_step(self.model, self.data)

        # Publish joint states
        js = JointState()
        js.header.stamp = self.get_clock().now().to_msg()
        js.name = [
            mujoco.mj_id2name(self.model, mujoco.mjtObj.mjOBJ_JOINT, i)
            for i in range(self.model.njnt)
        ]
        js.position = self.data.qpos.tolist()
        js.velocity = self.data.qvel.tolist()
        self.joint_pub.publish(js)

        # Publish EE poses
        for ee_id, pub in [
            (self.left_ee_id, self.left_ee_pub),
            (self.right_ee_id, self.right_ee_pub)
        ]:
            pose_msg = PoseStamped()
            pose_msg.header.stamp = js.header.stamp
            pose_msg.header.frame_id = 'world'
            pos = self.data.body(ee_id).xpos
            pose_msg.pose.position.x = float(pos[0])
            pose_msg.pose.position.y = float(pos[1])
            pose_msg.pose.position.z = float(pos[2])
            pub.publish(pose_msg)


def main(args=None):
    rclpy.init(args=args)
    node = MujocoSimNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()
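The remap inside xr_callback, rest + [-dz, -dx, dy], is doing a frame conversion: XR deltas arrive as x right, y up, z backward, while the robot world frame is x forward, y left, z up. Pulled out as a standalone function (the axis conventions here are my reading of the code; verify against your scene):

```python
def xr_to_robot_delta(d):
    """Map an XR position delta (x right, y up, z backward) into the robot
    world frame (x forward, y left, z up), matching the node's remap."""
    dx, dy, dz = d
    return [-dz, -dx, dy]


# Hand forward in XR is -z, which becomes +x (forward) for the robot:
print(xr_to_robot_delta([0, 0, -1]))  # [1, 0, 0]
# Hand left is -x, which becomes +y (left); hand up stays up:
print(xr_to_robot_delta([-1, 1, 0]))  # [0, 1, 1]
```

If the simulated arm moves the wrong way when you move your hand, this remap is the first place to look.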
Step 7.5 — Add a ROS 2 Service (Reset Simulation)
Create a service definition at xrobo_mujoco/srv/ResetSim.srv. (One caveat: rosidl interface generation only works in ament_cmake packages, and xrobo_mujoco was created as ament_python. In practice, put the .srv and .action files in a small companion ament_cmake interface package and adjust the imports; the xrobo_mujoco names are kept below for readability.)
# Request
bool reset_to_home # Reset robot to home position
float32 damping # New damping value (0 = keep current)
---
# Response
bool success
string message
Add the service server to your node:
from xrobo_mujoco.srv import ResetSim

# In __init__:
self.reset_srv = self.create_service(
    ResetSim, '/reset_simulation', self.reset_callback
)

def reset_callback(self, request, response):
    """Service: Reset simulation state."""
    if request.reset_to_home:
        mujoco.mj_resetData(self.model, self.data)
        mujoco.mj_resetDataKeyframe(
            self.model, self.data, self.model.key("home").id
        )
        self.origin_left = None
        self.origin_right = None
        self.goal_left = self.rest_left.copy()
        self.goal_right = self.rest_right.copy()
    if request.damping > 0:
        self.damping = request.damping
    response.success = True
    response.message = 'Simulation reset complete'
    self.get_logger().info(response.message)
    return response
Call from CLI:
ros2 service call /reset_simulation xrobo_mujoco/srv/ResetSim \
"{reset_to_home: true, damping: 0.15}"
Step 7.6 — Add a ROS 2 Action (Move to Target with Feedback)
Create the action definition at xrobo_mujoco/action/MoveToTarget.action:
# Goal
float64[3] left_target # [x, y, z]
float64[3] right_target # [x, y, z]
float64 tolerance # position error threshold
---
# Result
bool success
float64 final_error_left
float64 final_error_right
---
# Feedback
float64 error_left
float64 error_right
float64 progress_pct # 0-100
Add the action server:
from rclpy.action import ActionServer
from xrobo_mujoco.action import MoveToTarget

# In __init__:
self.move_action = ActionServer(
    self, MoveToTarget, '/move_to_target', self.move_execute_callback
)
def move_execute_callback(self, goal_handle):
    """Action: Move arms to target with progress feedback."""
    import time  # stdlib; paces the feedback loop
    self.get_logger().info('Moving to target...')
    target_left = np.array(goal_handle.request.left_target)
    target_right = np.array(goal_handle.request.right_target)
    tol = goal_handle.request.tolerance
    self.goal_left = target_left
    self.goal_right = target_right
    feedback = MoveToTarget.Feedback()
    max_steps = 1000
    for step in range(max_steps):
        err_l = np.linalg.norm(
            target_left - self.data.body(self.left_ee_id).xpos
        )
        err_r = np.linalg.norm(
            target_right - self.data.body(self.right_ee_id).xpos
        )
        feedback.error_left = float(err_l)
        feedback.error_right = float(err_r)
        feedback.progress_pct = min(100.0, (step / max_steps) * 100)
        goal_handle.publish_feedback(feedback)
        if err_l < tol and err_r < tol:
            break
        # NOTE: this sleep blocks the executor; spin the node with a
        # MultiThreadedExecutor so publish_state keeps stepping the sim.
        time.sleep(0.01)
    reached = bool(err_l < tol and err_r < tol)
    if reached:
        goal_handle.succeed()
    else:
        goal_handle.abort()
    result = MoveToTarget.Result()
    result.success = reached
    result.final_error_left = float(err_l)
    result.final_error_right = float(err_r)
    return result
Call from CLI:
ros2 action send_goal /move_to_target xrobo_mujoco/action/MoveToTarget \
"{left_target: [0.4, 0.5, 0.6], right_target: [0.4, -0.5, 0.6], tolerance: 0.01}" \
--feedback
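The termination logic of the action loop is easy to test in isolation: drive a toy end-effector toward the target with a proportional step and count iterations until the error drops below tolerance. A self-contained sketch, no ROS required (the gain and step count are arbitrary):

```python
import math

def steps_to_converge(start, target, tolerance, gain=0.1, max_steps=1000):
    """Proportional approach toward target; returns the step count at which
    the error first drops below tolerance, or None if max_steps is exhausted
    (mirroring the action's success / abort split)."""
    pos = list(start)
    for step in range(max_steps):
        if math.dist(pos, target) < tolerance:
            return step
        # move a fraction of the remaining error each iteration
        pos = [p + gain * (t - p) for p, t in zip(pos, target)]
    return None

print(steps_to_converge([0.0, 0.0, 0.0], [0.4, 0.5, 0.6], tolerance=0.01))
```

The geometric shrink means tighter tolerances cost only logarithmically more steps, which is why a fixed max_steps cap works well in practice.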
Step 7.7 — Configure DDS for Low-Latency Teleoperation
ROS 2 Humble uses Fast DDS by default. For XR teleoperation you want low-latency pose streaming with reliable command delivery.
Create ~/xrobo_ros_ws/fastdds_profile.xml:
<?xml version="1.0" encoding="UTF-8"?>
<dds xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
  <profiles>
    <!-- Low-latency for XR teleoperation -->
    <participant profile_name="xr_teleop_participant"
                 is_default_profile="true">
      <rtps>
        <builtin>
          <discovery_config>
            <leaseDuration><sec>5</sec></leaseDuration>
          </discovery_config>
        </builtin>
      </rtps>
    </participant>
    <!-- Reliable transport for commands/services -->
    <data_writer profile_name="reliable_writer">
      <qos>
        <reliability><kind>RELIABLE</kind></reliability>
        <durability><kind>TRANSIENT_LOCAL</kind></durability>
      </qos>
    </data_writer>
    <!-- Best-effort for high-frequency pose data -->
    <data_writer profile_name="realtime_writer">
      <qos>
        <reliability><kind>BEST_EFFORT</kind></reliability>
        <publishMode><kind>ASYNCHRONOUS</kind></publishMode>
      </qos>
    </data_writer>
  </profiles>
</dds>
Set environment variables in ~/.bashrc:
export RMW_IMPLEMENTATION=rmw_fastrtps_cpp
export FASTRTPS_DEFAULT_PROFILES_FILE=~/xrobo_ros_ws/fastdds_profile.xml
# For multi-machine setup (headset PC ↔ robot PC):
export ROS_DOMAIN_ID=42 # Same on all machines
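ROS_DOMAIN_ID matters because DDS derives its discovery UDP ports from it: machines on different domains compute different ports and never find each other. The well-known-ports formula from the DDSI-RTPS spec (PB=7400, DG=250, d0=0) shows the mapping:

```python
def discovery_multicast_port(domain_id, pb=7400, dg=250, d0=0):
    """Well-known DDS discovery multicast port for a domain
    (DDSI-RTPS default port parameters)."""
    return pb + dg * domain_id + d0

print(discovery_multicast_port(42))  # 17900: what ROS_DOMAIN_ID=42 discovers on
print(discovery_multicast_port(0))   # 7400: the default domain
```

ROS 2's documentation recommends keeping domain IDs roughly in 0-101 on Linux so the derived ports stay clear of the ephemeral range.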
Step 7.8 — Create a Launch File
Create xrobo_mujoco/launch/teleop_sim.launch.py:
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # 1. XR Publisher — reads from Pico via PC Service SDK
        Node(
            package='picoxr',
            executable='talker',
            name='xr_publisher',
            output='screen',
        ),
        # 2. MuJoCo Simulation — subscribes to /xr_pose
        Node(
            package='xrobo_mujoco',
            executable='mujoco_sim_node',
            name='mujoco_sim',
            output='screen',
            parameters=[{
                'model_xml': 'scene_double.xml',
                'step_size': 0.5,
                'damping': 0.15,
                'publish_rate': 100.0,
            }],
        ),
    ])
Step 7.9 — Build & Run Everything
cd ~/xrobo_ros_ws
colcon build
source install/setup.bash
# Launch the full stack
ros2 launch xrobo_mujoco teleop_sim.launch.py
Step 7.10 — Inspect the ROS 2 Graph
# Active nodes
ros2 node list
# /xr_publisher, /mujoco_sim
# Active topics
ros2 topic list
# /xr_pose, /joint_states, /left_ee_pose, /right_ee_pose
# Monitor XR data from Pico at ~90Hz
ros2 topic echo /xr_pose
# Check topic publish rate
ros2 topic hz /xr_pose
# List services and actions
ros2 service list # /reset_simulation
ros2 action list # /move_to_target
# Visualize the full node graph
ros2 run rqt_graph rqt_graph
# DDS health check
ros2 doctor --report
Complete ROS 2 Topic/Service/Action Map
| Type | Name | Message Type | Flow |
|---|---|---|---|
| Topic | /xr_pose | xr_msgs/Custom | picoxr → mujoco_sim |
| Topic | /joint_states | sensor_msgs/JointState | mujoco_sim → any |
| Topic | /left_ee_pose | geometry_msgs/PoseStamped | mujoco_sim → any |
| Topic | /right_ee_pose | geometry_msgs/PoseStamped | mujoco_sim → any |
| Service | /reset_simulation | xrobo_mujoco/srv/ResetSim | Reset sim state |
| Action | /move_to_target | xrobo_mujoco/action/MoveToTarget | Move arms to pose |
DDS middleware: Fast DDS (eProsima) — BEST_EFFORT for high-frequency pose data, RELIABLE for commands and services. Domain ID: 42.
Troubleshooting
Headset Can’t Connect to PC
Make sure the PC Service is running before launching the headset app. Verify both devices are on the same WiFi network. Check your firewall isn’t blocking the connection:
# Check if the service port is listening
ss -tlnp | grep robotics
Camera Not Working (Video See-Through)
VST camera access on Pico 4 Ultra requires enterprise approval from PICO. This is not something you can enable yourself — you need to contact their enterprise team for access.
Simulation Doesn’t Move
Make sure you’re holding the grip buttons to activate arm control. Without the grips held, incoming controller pose data is ignored to prevent accidental movements.
Qt Build Errors (Phase 2)
Double-check that your Qt path in qt-gcc.sh matches your actual installation. The path should point to the gcc_64 directory:
# Typical path:
~/Qt/6.6.3/gcc_64
# Verify it exists:
ls ~/Qt/6.6.3/gcc_64/bin/qmake
The Full Picture
Pico 4 Ultra (VR Headset)
    │ hand tracking + controller input
    ▼ (WiFi)
PC Service (bridge)
    │ relays pose data
    ▼ (local)
Python Teleop Script
    │ drives MuJoCo simulation
    │ records demonstration episodes
    ▼ (offline)
LeRobot Dataset
    │ convert .pkl → training format
    ▼
VLA Fine-tuning
    │ train on your demonstrations
    ▼
Autonomous Robot
    deploys learned policy on real hardware
The entire pipeline — from hand movements in VR to a trained robot policy — runs through this stack. VR teleoperation gives you high-quality demonstrations without needing a physical robot for data collection, and the same simulation environment you train in can be used for sim-to-real transfer when you’re ready to deploy on hardware.