1. Mixed Reality


Content

1. Wait – is there more than one reality?
2. Mixed reality for desktop apps in Processing




René Magritte, La Condition Humaine, 1935

Filippo Brunelleschi (1377–1446) with his mirror device, 15th century




1. Wait – is there more than one reality?

Mixed Reality Scale after Milgram et al., 1994



Augmented Reality

In the section ›Computer Vision‹ of the first part of the course we saw how markers (›fiducials‹) can be recognized by Processing through a camera and replaced with objects. An advanced example in two-dimensional space is the Reactable. Such an extension of reality with virtual elements, an interleaving of both levels, is usually called augmented reality.

Beispiel Augmented Reality: Reactable

Beispiel Augmented Reality: Pool Live Aid

While the virtual elements of the Reactable are confined to a display on a surface and can be regarded as a user interface, it has recently become popular to extend augmented reality into three-dimensional space.

Beispiel Augmented Reality: Pokémon Go

AR Overkill: Link



Augmented Virtuality



Virtuality (Virtual Reality)



Weird Reality

And then there's also Weird Reality: Link.


Immersion

Immersion means ›diving into‹ a virtual or augmented reality.
To intensify the feeling of being immersed in an alternative reality, display devices such as headsets and sensors (e.g. 3D gesture recognition) are used as interfaces. A characteristic of immersion is that the interface recedes into the background, or is no longer perceived at all, creating a sense of immediacy.

Beispiel: Link

DIY combo: Leap Motion & Google Cardboard


Applications

Headsets

3D sensors

Tracking


Example: Tracking system in the IEM CUBE in Graz. Controlling spatialized sound with a tracked object.



Google Cardboard



2. Mixed reality for desktop apps in Processing


"First, you should realise that every (3D) sketch you can run in Processing, can be used in an augmented reality context. The only thing that AR changes is the perspective/camera. So every sketch which holds 3D shapes could work." (Creative Applications Network)

Using the NyARToolkit library with markers

In the section ›Computer Vision‹ of the first part of the course, some strategies to track objects, colors, and markers in a cam feed were introduced, using libraries such as OpenCV and BoofCV. The 3D techniques we learned in the last section can now be applied to design generative objects that are attached to markers in augmented reality.
Another library that comes in handy is NyARToolkit. It provides the functionality of ARToolKit for Processing. Documentation can be found here (use Google Translate).

The most recent version of the NyARToolkit library can be downloaded here: https://github.com/nyatla/NyARToolkit-for-Processing/releases

Some useful functions the library provides (all of which appear in the sketches below) are: addARMarker() and addNyIdMarker() to register markers, detect() to search a camera frame for them, drawBackground() to draw the camera image behind the scene, isExist() to check whether a marker was found, beginTransform()/endTransform() to draw 3D content in a marker's coordinate system, and screen2ObjectCoordSystem() to translate screen coordinates into a marker's 3D space.

Using the NyARToolkit with markers:

import processing.video.*;
import jp.nyatla.nyar4psg.*;		// import nyARToolkit library

Capture cam;				// declare object for cam
MultiMarker nya;			// declare object for nya-marker

void setup() {
  size(640,480,P3D);
  cam=new Capture(this,640,480);	// initialize cam
  					// set Multimarker
  nya=new MultiMarker(this,width,height,"data/camera_para.dat",NyAR4PsgConfig.CONFIG_PSG);
  
  // Data for marker from image-file or generated marker-data
  nya.addARMarker(loadImage("data/hiro.png"),16,25,80);
  //nya.addARMarker("data/patt.hiro",80);
  cam.start();				// start cam feed
}

void draw() {
  if (!cam.available()) {		// if no new frame is available yet, skip this frame
    return;
  }
  cam.read();				// read image from cam
  nya.detect(cam);			// analyze cam image for markers
}


The following sketch builds on this skeleton: it attaches a textured, semi-transparent map stand to a marker and projects the mouse position into the marker's coordinate system.

import jp.nyatla.nyar4psg.*;
import jp.nyatla.nyar4psg.utils.*;

import processing.video.*;

Capture cam;
MultiMarker nya;

PVector pointer;            // A Vector that holds coordinates of mousepointer in 3D-space
PImage map;            // PImage for the map image
PShape maptable, stand, mapboard;    //PShapes for the virtual map stand

void setup() {
  size(640,480,P3D);
  //cam=new Capture(this,640,480);                // use internal cam
  cam = new Capture(this,  640, 480, "Live! Cam Sync HD VF0770"); // use external cam
  
  // setup Multimarker Object
  nya=new MultiMarker(this,width,height,"data/camera_para.dat",NyAR4PsgConfig.CONFIG_PSG);
  nya.addARMarker("data/patt.hiro",80);  // define a marker image to use
  map = loadImage("worldmap_smaller.jpg");  // load map image from data folder
  cam.start();                           // start cam
  
  // Create a virtual 3D object that holds the map
  maptable = createShape(GROUP);
  stand = createShape(BOX, 300,200,100);
  stand.setFill(color(255, 255, 255, 100));
  maptable.addChild(stand);
  mapboard = createShape(BOX, 300, -200, 1);
  mapboard.translate(0,0,55);
  mapboard.setFill(color(255, 255, 255, 255));
  mapboard.setTexture(map);
  maptable.addChild(mapboard);
}

void draw()
{
  if (!cam.available()) {   // if no new frame is available yet, skip this frame
      return;
  }
  cam.read();        // read image from cam
  nya.detect(cam);   // analyze cam image for markers
  background(0);
  nya.drawBackground(cam);  // draw the cam image to the background of our sketch
  if (!nya.isExist(0)) {  // if no marker was found in the feed of the cam
    return;            // go back to start of draw loop
  }
  pointer=nya.screen2ObjectCoordSystem(0,mouseX,mouseY); //translate mouseposition to 3D-world
  rotateY(PI);
  nya.beginTransform(0);      // start drawing in 3D of marker (0)
    shape(maptable);          // draw the map holder containing the map
    stroke(100,100,0);        // set conditions to draw a mouse pointer in 3D
    translate(0,0,50);
    fill(200);
    ellipse((int)pointer.x,(int)pointer.y,20,20);
  nya.endTransform();        // end drawing in 3D of the marker
}


NyID markers with IDs 0 to 10, made with the marker generator

Marker generator for the NyARToolkit library.




This sketch uses NyID markers instead of image markers: a for loop registers several marker IDs and links each of them to a 3D object with its own color and rotation.

import processing.video.*;
import jp.nyatla.nyar4psg.*;

Capture cam;
MultiMarker nya;
int amount = 10;
color[] colors = new color[amount];              // declare Array for colors
float[] rotation = new float[amount];            // declare Array for rotation
PShape figure;

void setup() {
  size(640,360,P3D);
  colorMode(HSB, 100);
  figure = loadShape("dummy_obj.obj");
  shapeMode(CENTER);
  //cam=new Capture(this,640,480);          // Internal Cam
  cam=new Capture(this,640,360, "Live! Cam Sync HD VF0770");
  nya=new MultiMarker(this,width,height,"data/camera_para.dat",NyAR4PsgConfig.CONFIG_PSG);
  
  // Add several NyID Markers by this for-loop --------------------------
  for (int i=0; i < amount; i++){
    nya.addNyIdMarker(i,80);                      // add Marker
    colors[i] = color(i*2+75, 100,100, 40);      // add color for linked object
    rotation[i] = random(TWO_PI);              // add rotation for linked object
  }
  cam.start();
}

void draw()
{
  if (!cam.available()) {   // if no new frame is available yet, skip this frame
      return;
  }
  cam.read();
  nya.detect(cam);
  background(0);
  nya.drawBackground(cam);
  lights();                 // lights() must be called inside draw() to take effect
  
  // Add transformations for all the markers in this for-loop --------------------------
  for (int i=0; i < amount; i++){
    if (nya.isExist(i)) {            // test whether this marker is detected
      nya.beginTransform(i);          // start drawing in 3D of the detected marker
        figure.setFill(colors[i]);    // set color of object
        rotateX(PI/2);
        rotateY(rotation[i]);        // set rotation of object
        translate(100,80);          // center it on marker
        shape(figure, 0, 0);        // draw object
      nya.endTransform();          // end drawing in 3D of the marker
    }
  }  
}



The last example attaches an instance of a custom class to each of eleven NyID markers; the Blossom class below gives every instance its own random shape, stem height, and rotation speed.

import processing.video.*;
import jp.nyatla.nyar4psg.*;

Capture cam;
MultiMarker nya;

int amount = 11;

Blossom[] blossoms = new Blossom[amount];


void setup() {
  size(640,360,P3D);
  colorMode(HSB);
  //cam=new Capture(this,640,480);          // Internal Cam
  cam=new Capture(this,640,360, "Live! Cam Sync HD VF0770");
  nya=new MultiMarker(this,width,height,"data/camera_para.dat",NyAR4PsgConfig.CONFIG_PSG);
  for (int i=0; i < amount; i++){
    nya.addNyIdMarker(i,80);
    blossoms[i] = new Blossom();
    blossoms[i].setPoints();
  }
  cam.start();
}

void draw()
{
  if (!cam.available()) {   // if no new frame is available yet, skip this frame
      return;
  }
  cam.read();
  nya.detect(cam);
  nya.drawBackground(cam);
  lights();                 // lights() must be called inside draw() to take effect

  for (int i=0; i < amount; i++){
    if (nya.isExist(i)) {
      nya.beginTransform(i);
        stroke(0,100);
        strokeWeight(6);
        line(0,0,0,0,0,blossoms[i].getStemHeight());
        translate(0, 0, blossoms[i].getStemHeight());
        blossoms[i].rotateObject();
        
        rotateX(blossoms[i].getRotation());
        rotateY(blossoms[i].getRotation());
        blossoms[i].drawObject();
      nya.endTransform();
    }
  }
}




class Blossom {
  float theta = 0.0;
  float rotationSpeed = random(0.001, 0.02);
  float stemHeight = random(100,300);
  int totalPoints = 40;
  float scaling = random(20, 40);
  
  float[] pointX = new float[totalPoints];
  float[] pointY = new float[totalPoints];
  float[] pointZ = new float[totalPoints];
  
 Blossom(){
   
 }
  
 void setPoints(){
  for(int i = 0; i < totalPoints ; i ++){
    pointX[i] = randomGaussian()*scaling;
    pointY[i] = randomGaussian()*scaling;
    pointZ[i] = randomGaussian()*scaling;
  } 
 }
  
 void drawObject(){
  beginShape(TRIANGLES);
  for(int i = 0; i < totalPoints-2 ; i ++){
    fill(205+i, 200, 200, 150);
    strokeWeight(2);
    stroke(255,80);
    vertex(pointX[i], pointY[i], pointZ[i]);
    vertex(pointX[i+1], pointY[i+1], pointZ[i+1]);
    vertex(pointX[i+2], pointY[i+2], pointZ[i+2]);
  }
  endShape(); 
 }
 
 void rotateObject(){
    theta += rotationSpeed;
 }
 
 float getStemHeight(){
    return stemHeight;
 }
 
 float getRotation(){
    return theta;
 }  
}
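Stripped of Processing's drawing calls, the geometry behind Blossom is simple: scatter Gaussian-distributed points around the origin and connect each run of three consecutive points as one triangle, so n points yield n − 2 triangles. A minimal plain-Java sketch of just that idea (the class and method names here are illustrative, not part of the library; java.util.Random.nextGaussian() stands in for Processing's randomGaussian()):

```java
import java.util.Random;

public class BlossomPoints {

    // Scatter n points around the origin with a Gaussian distribution,
    // scaled like Blossom's randomGaussian() * scaling.
    static float[][] scatter(int n, float scaling, long seed) {
        Random rng = new Random(seed);
        float[][] pts = new float[n][3];
        for (int i = 0; i < n; i++) {
            for (int axis = 0; axis < 3; axis++) {
                pts[i][axis] = (float) rng.nextGaussian() * scaling;
            }
        }
        return pts;
    }

    // Each loop iteration in Blossom.drawObject() emits the vertices
    // (i, i+1, i+2), so n points produce n - 2 overlapping triangles.
    static int triangleCount(int n) {
        return Math.max(0, n - 2);
    }

    public static void main(String[] args) {
        float[][] pts = scatter(40, 30f, 42L);
        System.out.println(pts.length + " points form "
            + triangleCount(pts.length) + " triangles");
        // prints "40 points form 38 triangles"
    }
}
```

With totalPoints = 40, as in the class above, this yields 38 triangles per blossom.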

Exercise (30 minutes):
Take your generative object from last lesson's exercise and attach it to a marker to take it from virtual to augmented reality.

Optional (1): Can you write a class for your object that lets you attach generative instances to several markers?

Optional (2): Can you write a sketch that attaches the objects of all students of the course to different markers to curate an augmented reality exhibition?