Augmented Reality (AR)

Liveposter supports Augmented Reality (AR) experiences that overlay 3D models, videos, and images on physical posters when viewed through a mobile device camera. The system uses:

  • MindAR - Image tracking library for recognizing physical posters
  • A-Frame - 3D/VR framework for rendering AR content
  • WebXR - Works in modern mobile browsers without app installation

How it works:

  1. You add an ar property to your poster JSON configuration
  2. MindAR uses .mind target files to recognize your physical poster images
  3. When recognized, A-Frame renders your AR layers (3D models, videos, images) on top
  4. Users scan physical posters with their mobile browser - no app needed!

Add an ar object to your poster spec to enable AR:

{
  "mode": "diaporama",
  "images": [{ "src": "poster.jpg" }],
  "ar": {
    "enabled": true,
    "targetFiles": ["poster.mind"],
    "layers": [
      {
        "targetIndex": 0,
        "type": "model",
        "src": "model.gltf",
        "position": { "x": 0, "y": 0, "z": 0 },
        "scale": { "x": 1, "y": 1, "z": 1 }
      }
    ]
  }
}

Configuration properties:

  • enabled (boolean) - Turn AR on/off
  • targetFiles (array of strings) - Paths to .mind files for image recognition
  • layers (array of objects) - AR content to display (3D models, videos, images)
    • Each layer must reference a targetIndex (which .mind file to track)
    • Each layer has a type: "model", "video", or "image"
    • Position, rotation, and scale use A-Frame coordinate system
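
Taken together, these fields imply a configuration shape roughly like the following TypeScript sketch. The field names come from this page; the interfaces themselves are illustrative, not an official Liveposter type definition.

// Hypothetical types inferred from the documented ar properties.
interface Vec3 { x: number; y: number; z: number }

interface ArLayer {
  targetIndex: number;          // index into targetFiles: which target to track
  type: "model" | "video" | "image";
  src: string | string[];       // an array is allowed for multi-format video
  position?: Vec3;              // meters, relative to the target center
  rotation?: Vec3;              // degrees around each axis
  scale?: Vec3;                 // multipliers, 1 = original size
  opacity?: number;
  animation?: boolean;          // models only
  loop?: boolean;               // videos only
  autoplay?: boolean;           // videos only
}

interface ArConfig {
  enabled: boolean;
  targetFiles: string[];        // paths to .mind files
  layers: ArLayer[];
}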

AR compilation requires additional dependencies. Install them only when you need AR features:

Terminal window
npm install @tensorflow/tfjs @msgpack/msgpack canvas mathjs ml-matrix svd-js tinyqueue @mediapipe/tasks-vision

Note: These dependencies are large (~200+ MB) and only needed for compiling .mind target files. Regular Liveposter animations work without them. This keeps the base package lightweight while letting AR users opt in when needed.
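
If you want to fail fast before attempting AR compilation, a small Node check for one of the optional dependencies could look like this. It is a sketch, not a Liveposter API, and assumes an ES-module context.

// Detect whether the optional AR compilation dependencies are installed.
import { createRequire } from "node:module";
const require = createRequire(import.meta.url);

function hasArDependencies(): boolean {
  try {
    require.resolve("@tensorflow/tfjs"); // one of the optional AR packages
    return true;
  } catch {
    return false;
  }
}

console.log(hasArDependencies() ? "AR dependencies installed" : "Run the npm install above first");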

Target files (.mind) contain visual feature data that MindAR uses to recognize your physical poster images.

Option A: Automatic Generation (Recommended)

Use the built-in CLI tool to automatically generate .mind files for all images in your poster specs:

Terminal window
# Generate targets for a single poster
npx liveposter ar-compile-targets poster.json
# Generate targets for a poster list
npx liveposter ar-compile-targets poster-list.json
# Force regenerate existing targets (overwrites all .mind files)
npx liveposter ar-compile-targets --force poster.json
# Preview what would be generated (doesn't compile)
npx liveposter ar-compile-targets --dry-run poster.json
# Verbose output with detailed progress
npx liveposter ar-compile-targets -v poster.json

What the tool does:

  • Finds images: Scans all JPG, JPEG, PNG, GIF, BMP, and WEBP files in your specs
  • Generates .mind files: Creates targets alongside original images (image.jpg → image.mind)
  • Smart caching: Automatically skips images that already have .mind files
  • Progress tracking: Shows [22/40] counter and real-time status for each file
  • Ignores remote and video sources: Remote URLs (http/https) and video files are skipped automatically

Command options:

  • -f, --force - Overwrite existing .mind files (useful when source images change)
  • -d, --dry-run - Preview what would be compiled without actually compiling
  • -v, --verbose - Show detailed progress with total image count
  • -h, --help - Display help message

Example output:

🎯 Starting AR target compilation...
[1/40] ⊘ Skipped: ./images/image-1.jpg (already exists)
[2/40] ⏳ Processing: ./images/image-2.jpg...
[2/40] ✓ Compiled: ./images/image-2.jpg → image-2.mind (849.4 KB)
[22/40] ⏳ Processing: ./images/poster.jpg... 60%
...
======================================================================
SUMMARY
======================================================================
✓ Compiled: 15
⊘ Skipped: 25 (already exist)
⏱️ Time: 42.3s
======================================================================

Status indicators:

  • Compiled - Successfully generated .mind file
  • Skipped - Already has .mind file (use --force to regenerate)
  • Processing - Currently compiling (shows progress %)
  • Failed - Compilation error (shows error message)

Option B: Manual Generation

  1. Visit the MindAR Image Compiler
  2. Upload your poster image (the physical image users will scan)
  3. Download the generated .mind file
  4. Place it in your project directory alongside the image

Tips for good tracking:

  • Use high-contrast images
  • Include distinctive features (corners, edges, patterns)
  • Avoid pure white or pure black images
  • Minimum recommended size: 400x400 pixels
  • Recommended: 480-1920px on longest side
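
To sanity-check images against these size recommendations before compiling, you could use a small Node script built on the canvas package (already one of the AR compilation dependencies). This is a sketch, not part of the Liveposter CLI.

// Warn when a target image falls outside the recommended dimensions.
import { loadImage } from "canvas";

async function checkTargetImage(path: string): Promise<void> {
  const img = await loadImage(path);
  const longest = Math.max(img.width, img.height);
  if (img.width < 400 || img.height < 400) {
    console.warn(`${path}: below the 400x400 px minimum (${img.width}x${img.height})`);
  } else if (longest < 480 || longest > 1920) {
    console.warn(`${path}: longest side ${longest}px is outside the 480-1920 px range`);
  } else {
    console.log(`${path}: OK (${img.width}x${img.height})`);
  }
}

checkTargetImage("poster.jpg").catch(console.error);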

How .mind file compilation works:

The target compilation process extracts and encodes visual features from your images:

  1. Feature Detection (0-50% progress)

    • Converts image to greyscale
    • Creates multi-scale image pyramids (using scale factor 2^(1/3))
    • Detects interest points (bright spots and dark spots) at each scale using TensorFlow.js
    • Applies hierarchical clustering to organize features
  2. Tracking Data Extraction (50-100% progress)

    • Processes smaller resolution images (256px and 128px)
    • Extracts optimized features for real-time AR tracking
    • Uses CPU-based TensorFlow.js in Node.js
  3. Binary Encoding

    • Compresses data using MessagePack format
    • Typical file size: 50-500 KB per image
    • Much smaller than original images
    • Contains feature points for matching and tracking (NOT pixel data)
  4. Result

    • A .mind file containing:
      • Image dimensions
      • Feature points for image recognition (matching)
      • Optimized features for real-time AR tracking
    • The original image is still needed for display - .mind files only contain abstract feature data

Important: The .mind file is used BY MindAR during AR sessions to recognize your physical poster, but you still need the original image for users to scan. Think of it as a “feature map” rather than a copy of the image.
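
Because the file is MessagePack-encoded (see “Binary Encoding” above), you can peek inside one with the @msgpack/msgpack package, which is already among the AR dependencies. The exact key names are MindAR internals and may vary between versions, so treat this as an exploratory sketch.

// Decode a compiled .mind file and list its top-level keys.
import { decode } from "@msgpack/msgpack";
import { readFile } from "node:fs/promises";

async function inspectMindFile(path: string): Promise<void> {
  const buffer = await readFile(path);
  const data = decode(buffer) as Record<string, unknown>;
  console.log(`Top-level keys in ${path}:`, Object.keys(data));
}

inspectMindFile("poster.mind").catch(console.error);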


Add an ar property to your poster spec:

{
  "mode": "diaporama",
  "timing": { "duration": 3000 },
  "images": [{ "src": "poster.jpg" }],
  "ar": {
    "enabled": true,
    "targetFiles": ["poster.mind"],
    "layers": [
      {
        "targetIndex": 0,
        "type": "model",
        "src": "bear.gltf",
        "position": { "x": 0, "y": -0.25, "z": 0 },
        "scale": { "x": 0.05, "y": 0.05, "z": 0.05 },
        "animation": true
      }
    ]
  }
}

Start the AR development server:

Terminal window
npx liveposter ar-dev my-poster.json

Open the URL on your mobile device and point the camera at your poster image.

Build a static AR site for production:

Terminal window
npx liveposter ar-build my-poster.json --output dist-ar

Deploy the dist-ar folder to any static hosting (Vercel, Netlify, etc.).

Display GLTF/GLB 3D models:

{
  "targetIndex": 0,
  "type": "model",
  "src": "model.gltf",
  "position": { "x": 0, "y": 0, "z": 0 },
  "rotation": { "x": 0, "y": 0, "z": 0 },
  "scale": { "x": 1, "y": 1, "z": 1 },
  "animation": true,
  "opacity": 1
}

Supported formats: GLTF (.gltf, .glb)

Overlay video content:

{
  "targetIndex": 0,
  "type": "video",
  "src": "video.mp4",
  "position": { "x": 0, "y": 0, "z": 0 },
  "scale": { "x": 1, "y": 1, "z": 1 },
  "loop": true,
  "autoplay": true,
  "opacity": 1
}

Supported formats: MP4, WebM, OGG

Multi-format support - provide multiple sources so the browser can fall back to a format it supports:

{
  "src": ["video.mp4", "video.webm"],
  "type": "video"
}

Display 2D images:

{
  "targetIndex": 0,
  "type": "image",
  "src": "overlay.png",
  "position": { "x": 0, "y": 0, "z": 0 },
  "scale": { "x": 1, "y": 1, "z": 1 },
  "opacity": 1
}

Best format: PNG (supports transparency)

Liveposter uses the A-Frame 3D coordinate system for positioning AR content. All layers support position, rotation, and scale properties.

Position units are meters, relative to the target (poster) center:

  • x: Left (negative) to Right (positive)
  • y: Down (negative) to Up (positive)
  • z: Away from camera (negative) to Toward camera (positive)

{
  "position": { "x": 0.5, "y": 0.2, "z": -0.1 }
  // 0.5m to the right, 0.2m up, 0.1m behind the poster
}

Rotation is in degrees around each axis:

  • x: Pitch (rotate around horizontal axis)
  • y: Yaw (rotate around vertical axis)
  • z: Roll (rotate around depth axis)

{
  "rotation": { "x": 0, "y": 45, "z": 0 }
  // Rotated 45° around the vertical axis
}

Scale values are multipliers, where 1 = original size:

{
  "scale": { "x": 0.1, "y": 0.1, "z": 0.1 }
  // Scaled to 10% of original size
}
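
A-Frame itself expresses position, rotation, and scale as space-separated “x y z” attribute strings. If you ever need to translate the JSON Vec3 objects above into that form, a one-line helper suffices (illustrative only):

// Convert a {x, y, z} object to A-Frame's "x y z" attribute string.
interface Vec3 { x: number; y: number; z: number }

function toAFrameVec(v: Vec3): string {
  return `${v.x} ${v.y} ${v.z}`;
}

// toAFrameVec({ x: 0.5, y: 0.2, z: -0.1 }) === "0.5 0.2 -0.1"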


Track multiple posters simultaneously:

{
  "ar": {
    "enabled": true,
    "targetFiles": [
      "poster1.mind",
      "poster2.mind",
      "poster3.mind"
    ],
    "layers": [
      { "targetIndex": 0, "type": "model", "src": "bear.gltf" },
      { "targetIndex": 1, "type": "model", "src": "raccoon.gltf" },
      { "targetIndex": 2, "type": "video", "src": "video.mp4" }
    ]
  }
}

Build AR experiences from multiple posters:

{
  "version": "0.0.1",
  "title": "AR Poster Collection",
  "description": "Multiple AR-enabled posters",
  "posters": [
    { "id": "poster-1", "configPath": "./posters/poster1.json", "enabled": true },
    { "id": "poster-2", "configPath": "./posters/poster2.json", "enabled": true }
  ]
}

Build the merged AR experience:

Terminal window
npx liveposter ar-build poster-list.json --output dist-ar

How it works:

  • All .mind files are combined into a single array
  • Layer targetIndex values are automatically adjusted to avoid conflicts
  • Assets are namespaced by poster ID
  • Single AR scene detects all enabled posters
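
To illustrate the targetIndex adjustment described above, here is a rough TypeScript sketch of the remapping: each poster’s layers are shifted by the number of target files contributed by the posters merged before it. This mirrors the documented behavior; it is not Liveposter’s actual implementation.

// Merge per-poster AR configs into one, offsetting each targetIndex.
interface PosterAr {
  targetFiles: string[];
  layers: { targetIndex: number; type: string; src: string }[];
}

function mergeArConfigs(posters: PosterAr[]): PosterAr {
  const merged: PosterAr = { targetFiles: [], layers: [] };
  for (const poster of posters) {
    const offset = merged.targetFiles.length; // targets merged so far
    merged.targetFiles.push(...poster.targetFiles);
    for (const layer of poster.layers) {
      merged.layers.push({ ...layer, targetIndex: layer.targetIndex + offset });
    }
  }
  return merged;
}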

CLI reference:

Terminal window
npx liveposter ar-compile-targets [options] <input>

Automatically generates .mind target files for all images in your poster specs.

Options:

  • -v, --verbose - Enable verbose logging with progress bars
  • -f, --force - Overwrite existing .mind files
  • -d, --dry-run - Preview what would be compiled without actually compiling
  • -h, --help - Show help message

Examples:

Terminal window
# Generate targets for all images in a spec
npx liveposter ar-compile-targets poster.json
# Generate for multiple posters in a list
npx liveposter ar-compile-targets poster-list.json
# Force regenerate (overwrites existing .mind files)
npx liveposter ar-compile-targets --force poster.json
# See what would be generated
npx liveposter ar-compile-targets --dry-run poster-list.json
# Verbose output with progress
npx liveposter ar-compile-targets -v poster.json
Terminal window
npx liveposter ar-dev <spec-file.json>

Features:

  • Hot-reload on file changes
  • Watches spec file, assets, and target files
  • Auto-rebuilds AR scene
  • Runs on port 3100 by default (configurable with PORT_AR)

Custom port:

Terminal window
PORT_AR=8080 npx liveposter ar-dev my-poster.json
Terminal window
npx liveposter ar-build <input> [options]

Options:

  • --output, -o <dir> - Output directory (default: dist-ar)
  • --aframe-version <version> - A-Frame version (default: 1.6.0)
  • --mindar-version <version> - MindAR version (default: 1.2.5)

Examples:

Terminal window
# Build single poster
npx liveposter ar-build poster.json
# Custom output directory
npx liveposter ar-build poster.json --output my-ar-build
# Custom CDN versions
npx liveposter ar-build poster.json --aframe-version 1.6.0

Run both demo and AR servers simultaneously:

Terminal window
# Option 1: Use npm script
npm run test:dual:dev
# Option 2: Manual (separate terminals)
# Terminal 1 - Demo server (port 3000)
npx liveposter poster-list.json
# Terminal 2 - AR server (port 3100)
npx liveposter ar-dev poster-list.json

Why run both?

  • View regular poster animations on desktop
  • Test AR experience on mobile
  • Same poster list, different output formats

Asset optimization tips:

3D models:

  • Use GLTF/GLB format (GLB is smaller, single-file)
  • Optimize polygon count (< 10k triangles recommended)
  • Compress textures (max 1024x1024 for mobile)
  • Use Draco compression for GLB files (see the example after these tips)

Videos:

  • Use H.264/MP4 for best compatibility
  • Keep resolution ≤ 1080p
  • Optimize bitrate (2-4 Mbps recommended)
  • Provide multiple formats (MP4, WebM) for fallback

Images:

  • Use PNG for transparency
  • Keep dimensions reasonable (≤ 2048px)
  • Optimize file size with tools like TinyPNG

Performance tips:

  1. Limit simultaneous targets: 3-5 targets maximum for good performance
  2. Reduce layer count: Keep layers per target under 5
  3. Use CDN for assets: Faster loading, better caching
  4. Test on real devices: Desktop simulators don’t reflect mobile performance
  5. Enable HTTPS: Required for camera access in production
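
For the Draco tip above, one common approach is the community gltf-pipeline tool (not part of Liveposter):

Terminal window
# Compress a GLTF model to a Draco-compressed GLB
npx gltf-pipeline -i model.gltf -o model.glb -d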

The AR build output is a static HTML file:

Terminal window
# Build
npx liveposter ar-build poster.json --output dist-ar
# Deploy to any static host
# - Netlify: Drag dist-ar folder
# - Vercel: Deploy dist-ar as root
# - GitHub Pages: Push dist-ar to gh-pages branch
# - AWS S3: Upload dist-ar contents

Deployment requirements:

  • HTTPS required: Camera access requires a secure context
  • CORS headers: Needed if loading assets from a different domain
  • Mobile-friendly: Viewport meta tag (included in the template)

Troubleshooting

Problem: AR scene loads but camera doesn’t start

Solutions:

  • Ensure you’re using HTTPS (required in production)
  • Check browser permissions for camera access
  • Try a different browser (Chrome/Safari recommended)

Problem: Camera works but poster isn’t recognized

Solutions:

  • Ensure good lighting conditions
  • Hold camera steady at 30-50cm from poster
  • Verify .mind file matches the physical poster
  • Use high-contrast target images
  • Regenerate .mind file with better source image

Problem: AR content jumps or disappears

Solutions:

  • Increase distance from poster
  • Improve lighting conditions
  • Use higher-quality target images
  • Avoid reflective surfaces on poster

Problem: Laggy or slow AR experience

Solutions:

  • Reduce model complexity
  • Lower video resolution
  • Decrease number of targets/layers
  • Test on target devices (not just desktop)

Problem: 404 errors for assets

Solutions:

  • Use relative paths in your JSON config
  • Verify assets are copied to dist-ar during build
  • Check browser console for specific errors
  • Ensure static server is serving from correct directory

See the AR examples in the repository:

  • packages/demo-server/public/ar-examples/ar-demo-1-model.json - Single 3D model
  • packages/demo-server/public/ar-examples/ar-demo-2-multi-model.json - Multiple targets
  • packages/demo-server/public/ar-examples/ar-demo-3-mixed-layers.json - Mixed layer types

Supported browsers:

  • iOS Safari 11+
  • Android Chrome 79+
  • Modern mobile browsers with camera access

No app installation required - everything runs in the browser.