
Thursday, January 8, 2015

Creating and Texturing JavaFX 3D Shapes

Hi there!

It's been a while since my last post, and it seems I said the same in the previous one... but you know how it goes: a lot of stuff in between, and this blog has a tradition of long posts, the kind that can't be delivered on a weekly or monthly basis. If you're a regular reader (thank you!), you know the drill.

This post is about my latest developments in JavaFX 3D, after working in close collaboration with a bunch of incredible guys over the last few months.

For those of you new to this blog, I've already written a few posts about JavaFX 3D. The most recent one was about the Rubik's Cube: RubikFX: Solving the Rubik's Cube with JavaFX 3D, and another was about the Leap Motion controller: Leap Motion Controller and JavaFX: A new touch-less approach.

I'll cover several topics in this post, from the Leap Motion skeletal tracking model and skinning meshes to creating and texturing new JavaFX 3D shapes.

Before getting started, did I mention that my article "Building castles in the Sky. Use JavaFX 3D to model historical treasures and more" has been published in the current issue of Java Magazine?

In a nutshell, this article describes a multi-model JavaFX 3D application, developed for virtual immersion in cultural heritage buildings, using models created by reverse engineering with photogrammetry techniques. The 3D model of the Menéndez Pelayo Library in Santander, Spain, is used throughout the article as an example of a complex model.



You can find this application and, thanks to Óscar Cosido, a free model of the Library here.

Leap Motion Skeletal Tracking Model


Since my first post about Leap Motion, I've improved the 3D version, after Leap Motion released version 2 of their API, which includes a skeletal tracking model.

I haven't had the chance to blog about it, but this early video shows my initial work. You can see that the model now includes bones, so a more realistic hand can be built.


I demoed a more advanced version at one of my JavaOne talks with the incredible James Weaver, Sean Phillips and Zoran Sevarac. Sadly, Jason Pollastrini couldn't make it, but he was part of the 3D team.

 

If you are interested, all the code is available here. Go, fork it and play with it if you have a Leap Motion controller.

 Yes, we did have a great time there.


The session was great. In fact you can watch it now at Parleys.

We even had a special guest: John Yoon, a.k.a. @JavaFX3D.


Skinning Meshes and Leap Motion

And then I met Alexander Kouznetsov.

It was during the Hackergarten 3D session, where Sven Reimers and I were hacking on some JavaFX 3D stuff, that he showed up, laptop in backpack, ready for some hacking. There's no better trick to get something done than telling a real developer: I bet you can't hack this...

So the challenge was importing a rigged hand in JSON format to use a SkinningMesh in combination with the new Leap Motion skeletal tracking model. As the one and only John Yoon would show later in his talk:
"In order to animate a 3D model, you need a transform hierarchy to which the 3D geometry is attached. 
The general term for this part of the pipeline is “rigging” or “character setup”.
Rigging is the process of setting up your static 3D model for computer-generated animation, to make it animatable."
He was in charge of animating Duke for the chess demo shown at the JavaOne 2013 keynote. As shown in the picture above, this requires a mesh, a list of joints, weights and transformations, binding the inner 'bones' to the surrounding external mesh, so that when the former move the latter is deformed, creating the desired animation effect.
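The deformation itself is standard linear blend skinning: each deformed vertex is a weighted combination of the joint transforms applied to it. Here is a tiny standalone sketch of that idea (hypothetical names, with each joint's motion reduced to a translation for readability; the real SkinningMesh uses full affine transforms):

```java
import java.util.Arrays;

/**
 * Minimal linear blend skinning sketch (illustration only, not the 3DViewer
 * API): each deformed vertex is the weighted sum of the per-joint motions
 * applied to the bind-pose vertex.
 */
public class SkinningSketch {

    /** deform one vertex: v' = sum_i w[i] * (v + jointOffset[i]) */
    static double[] deform(double[] v, double[][] jointOffsets, double[] weights) {
        double[] out = new double[3];
        for (int i = 0; i < weights.length; i++) {
            for (int k = 0; k < 3; k++) {
                out[k] += weights[i] * (v[k] + jointOffsets[i][k]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[] vertex = {1, 0, 0};
        // two joints: one static, one moved 2 units along Y
        double[][] offsets = {{0, 0, 0}, {0, 2, 0}};
        // this vertex follows each joint half-way
        double[] weights = {0.5, 0.5};
        System.out.println(Arrays.toString(deform(vertex, offsets, weights)));
        // [1.0, 1.0, 0.0] -- the vertex moves half of the joint's travel
        System.out.println(Arrays.toString(deform(vertex, offsets, new double[]{0, 1})));
        // [1.0, 2.0, 0.0] -- a fully weighted vertex follows the moving joint
    }
}
```

A vertex near a knuckle typically has non-zero weights for two joints, which is exactly what makes the skin bend smoothly instead of breaking at the joint.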

The SkinningMesh class in the 3DViewer project was initially designed for Maya models, and what we had was a rigged hand as a Three.js model in JSON format.

So, out of the blue, Alex built an importer and managed to extract the mesh of the hand by reverse engineering. Right after that he worked out the rest of the components of the SkinningMesh. The most important part was binding the transformations between joints:


        Affine[] bindTransforms = new Affine[nJoints];
        Affine bindGlobalTransform = new Affine();
        List<Joint> joints = new ArrayList<>(nJoints);
        List<Parent> jointForest = new ArrayList<>();
        
        for (int i = 0; i < nJoints; i++) {
            JsonObject bone = object.getJsonArray("bones").getJsonObject(i);
            Joint joint = new Joint();
            String name = bone.getString("name");
            joint.setId(name);
            JsonArray pos = bone.getJsonArray("pos");
            double x = pos.getJsonNumber(0).doubleValue();
            double y = pos.getJsonNumber(1).doubleValue();
            double z = pos.getJsonNumber(2).doubleValue();
            joint.t.setX(x);
            joint.t.setY(y);
            joint.t.setZ(z);
            int parentIndex = bone.getInt("parent");
            if (parentIndex == -1) {
                jointForest.add(joint);
                bindTransforms[i] = new Affine(new Translate(-x, -y, -z));
            } else {
                Joint parent = joints.get(parentIndex);
                parent.getChildren().add(joint);
                bindTransforms[i] = new Affine(new Translate(
                        -x - parent.getLocalToSceneTransform().getTx(), 
                        -y - parent.getLocalToSceneTransform().getTy(), 
                        -z - parent.getLocalToSceneTransform().getTz()));
            }
            joints.add(joint);
            joint.getChildren().add(new Axes(0.02));
        }

This was the first animation with the model:

 

The axes are shown at every joint. Observe how easy it is to deform a complex mesh just by rotating two joints:

       Timeline t = new Timeline(new KeyFrame(Duration.seconds(1), 
                new KeyValue(joints.get(5).rx.angleProperty(), 90),
                new KeyValue(joints.get(6).rx.angleProperty(), 90)));
        t.setCycleCount(Timeline.INDEFINITE);
        t.play(); 

With a working SkinningMesh, it was time to add the skeletal tracking model from Leap Motion.

First, we needed to match Bones to joints, and then apply the actual orientation of every bone to the corresponding joint transformation.


        listener = new LeapListener();
        listener.doneLeftProperty().addListener((ov,b,b1)->{
            if(b1){
                List<Finger> fingersLeft = listener.getFingersLeft();
                Platform.runLater(()->{
                    fingersLeft.stream()
                        .filter(finger -> finger.isValid())
                        .forEach(finger -> {
                            previousBone=null;
                            Stream.of(Bone.Type.values()).map(finger::bone)
                                .filter(bone -> bone.isValid() && bone.length()>0)
                                .forEach(bone -> {
                                    if(previousBone!=null){
                                        Joint joint = getJoint(false,finger,bone);
                                        Vector cross = bone.direction().cross(previousBone.direction());
                                        double angle = bone.direction().angleTo(previousBone.direction());
                                        joint.rx.setAngle(Math.toDegrees(angle));
                                        joint.rx.setAxis(new Point3D(cross.getX(),-cross.getY(),cross.getZ()));
                                    }
                                    previousBone=bone;
                            });
                    });
                    ((SkinningMesh)skinningLeft.getMesh()).update();
                });
            }
        });

The work was almost done! Back from JavaOne, I had time to finish the model, adding hand movements and drawing the joints:



This video sums up most of what we've accomplished:


If you are interested in this project, all the code is here. Feel free to clone or fork it. Pull requests will be very welcome.

TweetWallFX

One thing leads to another... Johan Vos and Sven asked me to join them in a project to create a Tweet Wall with JavaFX 3D for Devoxx 2014. JavaFX 3D? I couldn't say no, even though I wasn't attending!

Our first proposal (not the one Sven finally delivered) was based on the F(X)yz library from Sean and Jason: a SkyBox as a container, with several tori inside, and tweets rotating over them:


Needless to say, we used the great Twitter4J API for retrieving new tweets with the hashtag #Devoxx.

The first challenge was figuring out how to render the tweets over each torus. The solution was based on using a snapshot of the tweet (rendered in a background scene) as the diffuse map image of the PhongMaterial assigned to the torus.

The second was creating a banner effect by rotating the tweets over the tori. To avoid artifacts, a segmented torus was built on top of the first one, cropping the faces of a regular torus, so that the resulting mesh could be textured with the image.

This is our desired segmented torus. 



In the next section, we'll go into details of how we could accomplish this shape.

Creating new 3D shapes

Note to beginners: for an excellent introduction to JavaFX 3D, have a look at the 3D chapters in these books: JavaFX 8: Introduction by Example and Pro JavaFX 8: A Definitive Guide to Building Desktop, Mobile, and Embedded Java Clients.

To create this mesh in JavaFX 3D we use a TriangleMesh as the basis for our mesh, where we need to provide float arrays of vertices and texture coordinates, and one int array of vertex and texture indices defining every triangle face.

Since a torus can be constructed from a rectangle, by gluing both pairs of opposite edges together with no twists, we can use a 2D rectangular grid in a local system ($\theta$,$\phi$) and map every point with these equations:

\[X=(R+r \cos\phi) \cos\theta\\Z=(R+r \cos\phi) \sin\theta\\Y=r \sin\phi\]
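As a sanity check of these equations (standalone code, independent of the F(X)yz classes), every mapped point should lie at distance r from the torus center circle of radius R in the XZ plane:

```java
/**
 * Standalone check of the torus parametrization:
 *   X = (R + r cos(phi)) cos(theta)
 *   Z = (R + r cos(phi)) sin(theta)
 *   Y = r sin(phi)
 * Every generated point must lie at distance r from the center circle
 * of radius R lying in the XZ plane.
 */
public class TorusCheck {

    static double[] torusPoint(double R, double r, double theta, double phi) {
        double x = (R + r * Math.cos(phi)) * Math.cos(theta);
        double z = (R + r * Math.cos(phi)) * Math.sin(theta);
        double y = r * Math.sin(phi);
        return new double[]{x, y, z};
    }

    /** distance from a point to the circle of radius R in the XZ plane */
    static double distanceToCenterCircle(double[] p, double R) {
        double radial = Math.hypot(p[0], p[2]) - R; // in-plane offset from the circle
        return Math.hypot(radial, p[1]);
    }

    public static void main(String[] args) {
        double R = 500, r = 300; // same values used for the tweet wall tori below
        for (double theta = 0; theta < 2 * Math.PI; theta += 0.1) {
            for (double phi = 0; phi < 2 * Math.PI; phi += 0.1) {
                double d = distanceToCenterCircle(torusPoint(R, r, theta, phi), R);
                if (Math.abs(d - r) > 1e-9) {
                    throw new AssertionError("off-torus point at " + theta + "," + phi);
                }
            }
        }
        System.out.println("all sampled points lie on the torus");
    }
}
```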

So based on this grid (with colored borders and triangles for clarity):

we could create this torus (observe how the four corners of the rectangle are joined together in one single vertex):


Now, if we want to segment the mesh, we can get rid of a few elements at the borders. Keeping only the red inner grid, we get a segmented torus:



Vertex coordinates

As we can see in the SegmentedTorusMesh class from the F(X)yz library, generating the vertices for the mesh is really easy, based on the above equations, the desired number of subdivisions (20 and 16 in the figures) and the number of elements cropped in both directions (4):

       
    private TriangleMesh createTorus(int subDivX, int subDivY, int crop, float R, float r){    
        TriangleMesh triangleMesh = new TriangleMesh();

        // Create points
        List<Point3D> listVertices = new ArrayList<>();
        float pointX, pointY, pointZ;
        for (int y = crop; y <= subDivY-crop; y++) {
            float dy = (float) y / subDivY;
            for (int x = crop; x <= subDivX-crop; x++) {
                float dx = (float) x / subDivX;
                if(crop>0 || (crop==0 && x<subDivX && y<subDivY)){
                    pointX = (float) ((R+r*Math.cos((-1d+2d*dy)*Math.PI))*Math.cos((-1d+2d*dx)*Math.PI));
                    pointZ = (float) ((R+r*Math.cos((-1d+2d*dy)*Math.PI))*Math.sin((-1d+2d*dx)*Math.PI));
                    pointY = (float) (r*Math.sin((-1d+2d*dy)*Math.PI));
                    listVertices.add(new Point3D(pointX, pointY, pointZ));
                }
            }
        }

Note that we have to convert this collection to a float array. Since there is no such thing as a FloatStream in Java 8, I asked a question on StackOverflow, and as a result we now use a very handy FloatCollector to do the conversion:
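The FloatCollector itself isn't listed in this post; a minimal version along these lines (my sketch of the idea from the StackOverflow answer, not necessarily the exact published class) grows a float[] while collecting a DoubleStream, with no boxing:

```java
import java.util.Arrays;
import java.util.stream.DoubleStream;

/**
 * Minimal FloatCollector sketch: accumulates doubles into a growing float[].
 * Matches the DoubleStream.collect(supplier, accumulator, combiner) shape.
 */
public class FloatCollector {
    private float[] data;
    private int size;

    public FloatCollector(int initialCapacity) {
        data = new float[Math.max(initialCapacity, 16)];
    }

    public void add(double value) {              // ObjDoubleConsumer target
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = (float) value;
    }

    public void join(FloatCollector other) {     // combiner for parallel streams
        for (int i = 0; i < other.size; i++) {
            add(other.data[i]);
        }
    }

    public float[] toArray() {
        return Arrays.copyOf(data, size);
    }

    public static void main(String[] args) {
        float[] out = DoubleStream.of(1.5, 2.5, 3.5)
            .collect(() -> new FloatCollector(3), FloatCollector::add, FloatCollector::join)
            .toArray();
        System.out.println(Arrays.toString(out)); // [1.5, 2.5, 3.5]
    }
}
```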

        float[] floatVertices=listVertices.stream()
            .flatMapToDouble(p->DoubleStream.of(p.x,p.y,p.z))
            .collect(()->new FloatCollector(listVertices.size()*3), FloatCollector::add, FloatCollector::join)
            .toArray();

        triangleMesh.getPoints().setAll(floatVertices);

In case anybody is wondering why we don't use a plain float[]: using collections instead of simple float arrays allows us to perform mesh coloring (as we'll see later), subdivisions, ray tracing, ... using streams and, in many of these cases, parallel streams.

Well, in Jason's words: why doesn't TriangleMesh provide a format that incorporates the use of streams by default...??

Texture coordinates

In the same way, we can create the texture coordinates. We use the same grid, but now mapping (u,v) coordinates, from (0.0,0.0) at the top left corner to (1.0,1.0) at the bottom right one.


We need extra points for the borders.

        int index=0;
        int width=subDivX-2*crop;
        int height=subDivY-2*crop;
        float[] textureCoords = new float[(width+1)*(height+1)*2];
        for (int v = 0; v <= height; v++) {
            float dv = (float) v / ((float)(height));
            for (int u = 0; u <= width; u++) {
                textureCoords[index] = (float) u /((float)(width));
                textureCoords[index + 1] = dv;
                index+=2;
            }
        }
        triangleMesh.getTexCoords().setAll(textureCoords);

Faces

Once we have defined the coordinates we need to create the faces. From JavaDoc:
The term face is used to indicate 3 set of interleaving points and texture coordinates that together represent the geometric topology of a single triangle.
One face is defined by 6 indices: p0, t0, p1, t1, p2, t2, where p0, p1 and p2 are indices into the points array, and t0, t1 and t2 are indices into the texture coordinates array.

For convenience, we'll use two separate collections of point indices and texture indices.

Based on the figures above, we go triangle by triangle, selecting the three indices in a specific order. This is critical for the surface orientation. Also note that for vertices we reuse indices at the borders to avoid the formation of seams.

        List<Point3D> listFaces = new ArrayList<>();
        // Create vertices indices
        // (numDivX is the number of vertices per grid row: subDivX - 2*crop + 1)
        int numDivX = subDivX - 2*crop + 1;
        for (int y =crop; y<subDivY-crop; y++) {
            for (int x=crop; x<subDivX-crop; x++) {
                int p00 = (y-crop)*((crop>0)?numDivX:numDivX-1) + (x-crop);
                int p01 = p00 + 1;
                if(crop==0 && x==subDivX-1){
                    p01-=subDivX;
                }
                int p10 = p00 + ((crop>0)?numDivX:numDivX-1);
                if(crop==0 && y==subDivY-1){
                    p10-=subDivY*((crop>0)?numDivX:numDivX-1);
                }
                int p11 = p10 + 1;
                if(crop==0 && x==subDivX-1){
                    p11-=subDivX;
                }                
                listFaces.add(new Point3D(p00,p10,p11));                
                listFaces.add(new Point3D(p11,p01,p00));
            }
        }

        List<Point3D> listTextures = new ArrayList<>();
        // Create textures indices
        for (int y=crop; y<subDivY-crop; y++) {
            for (int x=crop; x<subDivX-crop; x++) {
                int p00 = (y-crop) * numDivX + (x-crop);
                int p01 = p00 + 1;
                int p10 = p00 + numDivX;
                int p11 = p10 + 1;
                listTextures.add(new Point3D(p00,p10,p11));                
                listTextures.add(new Point3D(p11,p01,p00));
            }
        }
       
Now we have to join them. The advantages of this approach will be shown later.

        // create faces
        AtomicInteger count=new AtomicInteger();
        int[] faces = listFaces.stream()
            .map(f->{
                Point3D t=listTextures.get(count.getAndIncrement());
                int p0=(int)f.x; int p1=(int)f.y; int p2=(int)f.z;
                int t0=(int)t.x; int t1=(int)t.y; int t2=(int)t.z;
                return IntStream.of(p0, t0, p1, t1, p2, t2);
            }).flatMapToInt(i->i).toArray();
        triangleMesh.getFaces().setAll(faces);
    
        // finally return mesh
        return triangleMesh;
    }

This picture shows how we create the first and last pairs of faces. Note the use of counterclockwise winding to define the front faces, so that the normal of every surface points outwards (towards the outside of the screen).
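The winding can be verified numerically: the face normal is the cross product of the two edge vectors, and reversing the vertex order flips its sign. A quick standalone sketch:

```java
/**
 * Quick check of triangle winding: the face normal is the cross product of
 * the two edge vectors; reversing the vertex order flips the normal. Which
 * direction then counts as "outwards" depends on the camera convention.
 */
public class WindingCheck {

    static double[] faceNormal(double[] a, double[] b, double[] c) {
        double[] u = {b[0]-a[0], b[1]-a[1], b[2]-a[2]}; // edge a->b
        double[] v = {c[0]-a[0], c[1]-a[1], c[2]-a[2]}; // edge a->c
        return new double[]{
            u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]};
    }

    public static void main(String[] args) {
        double[] p0 = {0, 0, 0}, p1 = {1, 0, 0}, p2 = {0, 1, 0};
        System.out.println(faceNormal(p0, p1, p2)[2]);  // 1.0  (one winding)
        System.out.println(faceNormal(p0, p2, p1)[2]);  // -1.0 (reversed winding)
    }
}
```

If a mesh shows up black or invisible from the expected side, swapping two indices in each face (as the p00/p10/p11 vs p11/p01/p00 ordering above does) is usually the fix.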


Finally, we can create our banner effect by adding two tori, both solid (DrawMode.FILL), one of them segmented and textured with an image. This snippet shows the basics:

        SegmentedTorusMesh torus = new SegmentedTorusMesh(50, 42, 0, 500d, 300d); 
        PhongMaterial matTorus = new PhongMaterial(Color.FIREBRICK);
        torus.setMaterial(matTorus);
        
        SegmentedTorusMesh banner = new SegmentedTorusMesh(50, 42, 14, 500d, 300d); 
        PhongMaterial matBanner = new PhongMaterial();
        matBanner.setDiffuseMap(new Image(getClass().getResource("res/Duke3DprogressionSmall.jpg").toExternalForm()));
        banner.setMaterial(matBanner); 
     
        Rotate rotateY = new Rotate(0, 0, 0, 0, Rotate.Y_AXIS);
        torus.getTransforms().addAll(new Rotate(0,Rotate.X_AXIS),rotateY);
        banner.getTransforms().addAll(new Rotate(0,Rotate.X_AXIS),rotateY);
   
        Group group = new Group();
        group.getChildren().addAll(torus,banner);        
        Group sceneRoot = new Group(group);
        Scene scene = new Scene(sceneRoot, 600, 400, true, SceneAntialiasing.BALANCED);
        primaryStage.setTitle("F(X)yz - Segmented Torus");
        primaryStage.setScene(scene);
        primaryStage.show(); 

        final Timeline bannerEffect = new Timeline();
        bannerEffect.setCycleCount(Timeline.INDEFINITE);
        final KeyValue kv1 = new KeyValue(rotateY.angleProperty(), 360);
        final KeyFrame kf1 = new KeyFrame(Duration.millis(10000), kv1);
        bannerEffect.getKeyFrames().addAll(kf1);
        bannerEffect.play();

to get this animation working:

 

Playing with textures

The last section of this long post will show you how we can hack the textures of a TriangleMesh to display more advanced images over the 3D shape. This will include:
  • Coloring meshes (vertices or faces) 
  • Creating contour plots
  • Using patterns
  • Animating textures
This work is inspired by a question from Álvaro Álvarez on StackOverflow, about coloring individual triangles or individual vertices of a mesh. The immediate answer would be: no, you can't easily, since for one mesh there's one material with one diffuse color, and it's not possible to assign different materials to different triangles of the same mesh. You could create as many meshes and materials as colors, if that number were really small.

Using textures was the only way, but for that, following the standard procedure, you would need to color your texture image precisely, to match each triangle with each color.

In convex polyhedra there's at least one net, a 2D arrangement of polygons that can be folded into the faces of the 3D shape. Based on an icosahedron (20 faces), we could use its net to color every face:



And then use the image as texture for the 3D shape:


This was my first answer, but I started thinking about another approach. What if, instead of the colored net above, we created at runtime a small image of colored rectangles, like this:


and tricked the texture coordinates and texture indices into finding their values in this image instead? Done! The result was this much neater picture:



(The required code to do this is in my answer, so I won't post it here). 

And going a little bit further: if we could create a palette image, with one color per pixel, we could also assign one color to each vertex, and the texture for the rest of the triangle would be interpolated by the scene graph! This was part of a second answer:


Color Palette

With this small class we can create small images with up to 1530 unique colors. The most important thing is that they are consecutive, so we'll have smooth contour plots, without unwanted bumps when intermediate values are interpolated.
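Where does 1530 come from? The hue wheel at full saturation and brightness is six linear RGB ramps of 255 steps each, so 6 × 255 = 1530 distinct colors. A quick standalone check (using java.awt's HSB conversion, which follows the same HSB math as JavaFX's Color.hsb):

```java
import java.awt.Color;
import java.util.HashSet;
import java.util.Set;

/**
 * Counts the distinct RGB colors reachable at full saturation and brightness:
 * the hue wheel is six linear ramps of 255 steps each, 6 * 255 = 1530 colors.
 */
public class PaletteSize {
    public static void main(String[] args) {
        Set<Integer> distinct = new HashSet<>();
        int samples = 15300; // fine enough to hit every rounded channel value
        for (int i = 0; i < samples; i++) {
            distinct.add(Color.HSBtoRGB(i / (float) samples, 1f, 1f));
        }
        System.out.println(distinct.size()); // 1530
    }
}
```

This is why hues are a good basis for a contour-plot palette: consecutive entries differ by a single channel step, so interpolated texture lookups between two vertices pass through visually adjacent colors.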



To generate this 40x40 image (2 KB) at runtime we just use this short snippet:

        WritableImage imgPalette = new WritableImage(40, 40);
        PixelWriter pw = imgPalette.getPixelWriter();
        AtomicInteger count = new AtomicInteger();
        IntStream.range(0, 40).boxed()
                .forEach(y->IntStream.range(0, 40).boxed()
                        .forEach(x->pw.setColor(x, y, 
                            // note the double division: an int division by 1600 would always give 0
                            Color.hsb(count.getAndIncrement()/1600d*360d, 1, 1))));

With it, we can retrieve the texture coordinates for a given point from this image and update the texture coordinates on the mesh:

    public DoubleStream getTextureLocation(int iPoint){
        int y = iPoint/40; 
        int x = iPoint-40*y;
        return DoubleStream.of((((float)x)/40f),(((float)y)/40f));
    }

    public float[] getTexturePaletteArray(){
        return IntStream.range(0,colors).boxed()
            .flatMapToDouble(palette::getTextureLocation)
            .collect(()->new FloatCollector(2*colors), FloatCollector::add, FloatCollector::join)
            .toArray();
    }

    mesh.getTexCoords().setAll(getTexturePaletteArray());

Density Maps

Half of the work is done. The other half consists of assigning a color to every vertex or face in our mesh, based on some criterion: a mathematical function so that for any $(x,y,z)$ coordinates we get a value $f(x,y,z)$ that can be scaled within our range of colors.

So let's have a function:

    @FunctionalInterface
    public interface DensityFunction<T> {
        Double eval(T p);
    }

    private DensityFunction<Point3D> density;

Let's find the extreme values by evaluating the given function at all the vertices, using parallel streams:

    private double min, max;

    public void updateExtremes(List<Point3D> points){
        max=points.parallelStream().mapToDouble(density::eval).max().orElse(1.0);
        min=points.parallelStream().mapToDouble(density::eval).min().orElse(0.0);
        if(max==min){
            max=1.0+min;
        }
    }

Finally, we assign a color to every vertex in every face, again evaluating the given function with parallel streams:
    
    public int mapDensity(Point3D p){
        int f=(int)((density.eval(p)-min)/(max-min)*colors);
        if(f<0){
            f=0;
        }
        if(f>=colors){
            f=colors-1;
        }
        return f;
    }

    public int[] updateFacesWithDensityMap(List<Point3D> points, List<Point3D> faces){
        return faces.parallelStream().map(f->{
                int p0=(int)f.x; int p1=(int)f.y; int p2=(int)f.z;
                int t0=mapDensity(points.get(p0));
                int t1=mapDensity(points.get(p1));
                int t2=mapDensity(points.get(p2));
                return IntStream.of(p0, t0, p1, t1, p2, t2);
            }).flatMapToInt(i->i).toArray();
    }

    mesh.getFaces().setAll(updateFacesWithDensityMap(listVertices, listFaces));

Did I say I love Java 8??? You can see now how the strategy of using lists for vertices, textures and faces has clear advantages over the plain float arrays.

Let's run an example, using the IcosahedronMesh class from F(X)yz:
    
    IcosahedronMesh ico = new IcosahedronMesh(5,1f);
    ico.setTextureModeVertices3D(1600,p->(double)p.x*p.y*p.z);
    Scene scene = new Scene(new Group(ico), 600, 600, true, SceneAntialiasing.BALANCED);
    primaryStage.setScene(scene);
    primaryStage.show();     

This is the result:


Impressive, right? After a long explanation, we can happily say: yes! We can color every single triangle or vertex of the mesh!

And we could even move the colors, creating a smooth animation. For this we only need to update the faces (vertices and texture coordinates stay the same). This video shows one:


More features

More? In this post? No, I won't extend it any more. I'll just post this picture:




And refer you to all these available 3D shapes and more at the F(X)yz repository. If I have the time, I'll try to post about them in a second part.

Conclusions

The JavaFX 3D API, in combination with the new Java 8 features, has proven really powerful in terms of rendering complex meshes. The API can easily be extended to create libraries or frameworks that help the developer when 3D features are required.

We are still far from others (Unity 3D, Three.js, ... to name a few), but with the collaboration of the great JavaFX community we can shorten this gap.

Please clone the repository, test it, create pull requests, issues, feature requests, ... Get in touch with us, and help us keep this project alive and growing.

Also, visit StackOverflow and ask questions there using these tags: javafx, javafx-8 and the new javafx-3d. You never know where a good question may take you! And the answers will help other developers too.

A final word to give a proper shout-out to Sean Phillips and Jason Pollastrini, founders of the F(X)yz library, for starting an outstanding project.


Sunday, June 9, 2013

Leap Motion Controller and JavaFX: A new touch-less approach

Hi, it's been a while since my last post, but this first half of the year I've been quite busy at work, so I had to put on hold most of my JavaFX, Raspberry Pi and Arduino projects.

In all this time I could afford only one distraction, because the device really deserves it!

In April I had the chance to get involved in the Leap Motion Developer Program (thanks for that to Jim Weaver and Simon Ritter and, of course, to the Leap Motion staff), and since I received the little but powerful device at home, I've been playing around with it in several JavaFX based projects.

So this post is a little briefing on the few projects I've done with the Leap Motion controller and JavaFX, most of them just as proofs of concept.

As the Leap SDK is private for now (though they intend to make it public soon), I won't release any code, just snippets, a few screenshots and short videos.

At a glance, this is what I'll cover:
  • The Leap Motion Controller, what you can expect: hands, fingers, tools and basic gestures.
  • The JavaFX and the Leap threads, change listeners to the rescue.
  • POC #1. Moving 2D shapes on a JavaFX scene, trapping basic gestures with the Leap.
  • POC #2. Physics 2D worlds and Leap, a JavaFX approach.
  • POC #3. JavaFX 3D and Leap, with JDK8 early preview and openJFX Project.
I won't go into much detail regarding the Leap itself. There are plenty of videos out there. If you don't know about it yet, check these, for instance:
For those of you already on the pre-order list, July 22nd is almost here... be (just a little bit more) patient! For those who haven't decided to buy one yet, maybe this read will help you make up your mind.

Let's go!

1. The Leap Motion Controller

After you plug the Leap Motion device into your USB port (Windows, Mac and Linux), and download and install its software, you can try the Leap Visualizer, a bundled application which lets you learn and discover the magic of the Leap.



As soon as you launch it, you can virtually see your hands and fingers moving around the screen. It's really impressive, thanks to the high precision of the tracking and the high frequency at which the Leap scans.

Activating the hand and finger visualization, you realize those are the basics of the model provided: the Leap will detect none, one or several hands, and several fingers on each of them. For each hand it will report, for instance, its position, where it points (hand direction) and its palm normal. For each finger, you'll get its position and where it points. You can also get hand or finger velocities.

These positions, directions and velocities of your real hands and fingers are 3D vectors, referred to a right-handed Cartesian coordinate system with the origin at the center of the device, the X and Z axes lying in the horizontal plane and the Y axis vertical.

It's important to note that you'll have to convert these coordinates to screen coordinates if you want to display and move anything on it. For that, you need to calculate where the vector of the hand or finger direction intersects the plane of the screen.
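The Leap SDK wraps this in Screen.intersect(), but conceptually it's just a ray-plane intersection. A standalone sketch of the idea (the screen plane and hand values below are hypothetical, not the SDK's calibration data):

```java
/**
 * Conceptual sketch of what a screen intersection computes: where the ray
 * from the hand position along the pointing direction crosses the screen
 * plane, given a point on the plane and its normal.
 */
public class RayPlane {

    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    /** returns the intersection point, or null if the ray is parallel to the plane */
    static double[] intersect(double[] origin, double[] dir,
                              double[] planePoint, double[] planeNormal) {
        double denom = dot(dir, planeNormal);
        if (Math.abs(denom) < 1e-12) {
            return null; // pointing parallel to the screen
        }
        double t = (dot(planePoint, planeNormal) - dot(origin, planeNormal)) / denom;
        return new double[]{origin[0] + t*dir[0], origin[1] + t*dir[1], origin[2] + t*dir[2]};
    }

    public static void main(String[] args) {
        // hand 400 mm in front of a vertical screen at z = 0, pointing straight at it
        double[] hit = intersect(new double[]{100, 200, 400}, new double[]{0, 0, -1},
                                 new double[]{0, 0, 0}, new double[]{0, 0, 1});
        System.out.println(hit[0] + ", " + hit[1]); // 100.0, 200.0
    }
}
```

The SDK returns this intersection in normalized screen coordinates, which is why the POC code later multiplies by widthPixels() and heightPixels().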

The Leap device performs complete scans of its surroundings, with an effective range of approximately 25 to 600 millimeters above the device (1 inch to 2 feet). Each scan defines a frame, with all its associated data.

The scan rate is really high; that's what makes the Leap so impressive and accurate compared to similar devices. Depending on your CPU and the amount of data analyzed, the processing latency ranges from 2 ms to 33 ms, giving rates from 30 to 500 fps.


Besides directions, a basic collection of gestures is also provided: key tap, screen tap, swipe and circle gestures are tracked by comparing finger movements across different frames.


The good thing about having access to all the frame data is that you can define your own custom gestures, and try to detect them by analyzing a relatively short collection of frames, over and over again.
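As an illustration (my own sketch, not SDK code), a custom swipe detector can be as simple as buffering a fingertip coordinate over a few frames and checking the net displacement; the buffer size and threshold below are made-up values:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Hypothetical custom-gesture sketch: detect a horizontal swipe by keeping
 * a short buffer of fingertip X positions and checking the net displacement.
 * Thresholds and buffer size are invented values, not SDK constants.
 */
public class SwipeDetector {
    private final Deque<Double> xs = new ArrayDeque<>();
    private final int bufferSize;
    private final double threshold; // net horizontal travel, in millimeters

    public SwipeDetector(int bufferSize, double threshold) {
        this.bufferSize = bufferSize;
        this.threshold = threshold;
    }

    /** feed one frame's fingertip X; returns +1/-1 on a right/left swipe, 0 otherwise */
    public int onFrame(double x) {
        xs.addLast(x);
        if (xs.size() > bufferSize) {
            xs.removeFirst(); // sliding window over the last N frames
        }
        if (xs.size() == bufferSize) {
            double travel = xs.peekLast() - xs.peekFirst();
            if (travel > threshold) { xs.clear(); return 1; }
            if (travel < -threshold) { xs.clear(); return -1; }
        }
        return 0;
    }

    public static void main(String[] args) {
        SwipeDetector detector = new SwipeDetector(5, 80);
        int result = 0;
        for (double x : new double[]{0, 30, 60, 90, 120}) { // fast move to the right
            result = detector.onFrame(x);
        }
        System.out.println(result); // 1
    }
}
```

In a real listener this would be fed from onFrame() with the fingertip position of each new Frame; velocity-based checks would make it more robust against slow drifts.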

To end this brief intro to the great Leap Motion device, let's say that by being on the Developer Program you can get the SDK for many programming languages, such as Java, JavaScript, C++, C#, Objective-C or Python.

In terms of Java code, all you need to do is extend the Listener class provided by the SDK, basically override the onFrame method, and let the magic begin.

2. The JavaFX and the Leap threads
 
Having a Leap Motion controller means you can interact with your applications in a very different way from the one you're used to. For that, you just need to integrate the Leap events, in terms of movements or actions, into your apps.

In a JavaFX based application, one easy way to do this is by adding ObjectProperty<T> objects to the LeapListener class, in order to set the desired values at every frame using Vector, Point2D, Point3D, CircleGesture, ... and then implement the related public ObservableValue<T> methods.

Then, in the JavaFX thread, an anonymous ChangeListener<T> class can be added to listen for any change in the ObservableValue. Special care must be taken here, as anything related to the UI must be handled via Platform.runLater().

The next proof of concept samples will try to explain this.

3. POC #1. Moving 2D shapes on a JavaFX scene

Let's say we want to move a node on the scene with our hand, as a first simple POC.

We create two classes. First, the SimpleLeapListener class, which extends Listener, where at every frame we just set the coordinates of the screen point the hand points at:

public class SimpleLeapListener extends Listener {

    private ObjectProperty<Point2D> point=new SimpleObjectProperty<>();
    
    public ObservableValue<Point2D> pointProperty(){ return point; }
    
    @Override
    public void onFrame(Controller controller) {
        Frame frame = controller.frame();
        if (!frame.hands().empty()) {
            Screen screen = controller.calibratedScreens().get(0);
            if (screen != null && screen.isValid()){
                Hand hand = frame.hands().get(0);
                if(hand.isValid()){
                    Vector intersect = screen.intersect(hand.palmPosition(),hand.direction(), true);
                    point.setValue(new Point2D(screen.widthPixels()*Math.min(1d,Math.max(0d,intersect.getX())),
                            screen.heightPixels()*Math.min(1d,Math.max(0d,(1d-intersect.getY())))));
                }
            }
        }
    }
}

And LeapJavaFX, our JavaFX class, which listens for changes in this point and reflects them on the scene:

 
public class LeapJavaFX extends Application { 
    private SimpleLeapListener listener = new SimpleLeapListener();
    private Controller leapController = new Controller();
    
    private AnchorPane root = new AnchorPane();
    private Circle circle=new Circle(50,Color.DEEPSKYBLUE);
    
    @Override
    public void start(Stage primaryStage) {
        
        leapController.addListener(listener);        
        circle.setLayoutX(circle.getRadius());
        circle.setLayoutY(circle.getRadius());
        root.getChildren().add(circle);
        final Scene scene = new Scene(root, 800, 600);        
        
        listener.pointProperty().addListener(new ChangeListener<Point2D>(){
            @Override 
            public void changed(ObservableValue<? extends Point2D> ov, Point2D t, final Point2D t1) {
                Platform.runLater(new Runnable(){
                    @Override 
                    public void run() {
                        Point2D d=root.sceneToLocal(t1.getX()-scene.getX()-scene.getWindow().getX(),
                                                    t1.getY()-scene.getY()-scene.getWindow().getY());
                        double dx=d.getX(), dy=d.getY();
                        if(dx>=0d && dx<=root.getWidth()-2d*circle.getRadius() && 
                           dy>=0d && dy<=root.getHeight()-2d*circle.getRadius()){
                            circle.setTranslateX(dx);
                            circle.setTranslateY(dy);                                
                        }
                    }
                });
            }
        });
        
        primaryStage.setScene(scene);
        primaryStage.show();
    }
    @Override
    public void stop(){
        leapController.removeListener(listener);
        
    }
}

Pretty simple, isn't it? This short video shows the result.


Here is a second sample based on the same idea: one circle is displayed per detected hand, its radius growing or shrinking according to the Z distance between the hand and the Leap. When a key tap gesture is detected, a shadow circle is shown where the tap occurs, moving from the previous tap location by animating both its translate and scale properties.

Here you can see it in action:
 

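There is nothing Leap-specific in the radius mapping itself. As a rough sketch of how the palm Z distance could be clamped and mapped to a circle radius, here is a small standalone helper (the class name, method name and ranges are hypothetical, not the exact values used in the demo):

```java
public class RadiusMapper {
    // Map a palm Z distance (hypothetical range, in mm) to a circle radius in pixels.
    // Closer to the Leap (smaller z) -> bigger circle; farther away -> smaller.
    public static double radiusForZ(double z, double zMin, double zMax,
                                    double rMin, double rMax) {
        // clamp z into [zMin, zMax]
        double clamped = Math.max(zMin, Math.min(zMax, z));
        // normalize to [0,1] and invert, so z == zMin gives the largest radius
        double t = 1d - (clamped - zMin) / (zMax - zMin);
        return rMin + t * (rMax - rMin);
    }

    public static void main(String[] args) {
        System.out.println(radiusForZ(-100, -100, 300, 10, 60)); // nearest -> 60.0
        System.out.println(radiusForZ(300, -100, 300, 10, 60));  // farthest -> 10.0
        System.out.println(radiusForZ(500, -100, 300, 10, 60));  // out of range, clamped -> 10.0
    }
}
```

The clamping matters because the Leap occasionally reports positions outside its reliable tracking range.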

4. POC #2. Physics 2D worlds and Leap, a JavaFX approach

When Toni Epple saw this video, he suggested adding some physics to the mix, so I started learning from his blog posts about JavaFX and JBox2D, the Java port of the popular Box2D physics engine. Using his amazing work, I was able to create a simple World, add some dynamic bodies, static walls at the boundaries, and a big static circle that I could move with the Leap, as in the previous samples. Thank you, Toni, your work is really inspiring!
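Most of the plumbing when mixing JBox2D and JavaFX is converting between pixel coordinates (Y axis pointing down) and world coordinates in meters (Y axis pointing up). A minimal sketch of that conversion follows; the helper class is hypothetical, though the scale of 50 pixels per meter and the origin of (4, 8) match the snippet below:

```java
public class WorldConverter {
    private final float scale;      // pixels per meter
    private final float originX;    // world origin, in meters, measured from the pane's top-left
    private final float originY;

    public WorldConverter(float scale, float originX, float originY) {
        this.scale = scale;
        this.originX = originX;
        this.originY = originY;
    }

    // pane pixels -> JBox2D world coordinates (scaled to meters, Y flipped)
    public float[] toWorld(double px, double py) {
        return new float[]{ (float) (px / scale) - originX,
                            originY - (float) (py / scale) };
    }

    // JBox2D world coordinates -> pane pixels
    public double[] toPixels(float wx, float wy) {
        return new double[]{ (wx + originX) * scale,
                             (originY - wy) * scale };
    }
}
```

With these values, the pixel point (200, 100) maps to the world point (0, 6), and converting back recovers the original pixels, which is the round trip the listener below performs on every frame.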

Here is a code snippet of the JavaFX class.

public class PhysicsLeapJavaFX extends Application { 
    private SimpleLeapListener listener = new SimpleLeapListener();
    private Controller leapController = new Controller();
    
    private Button button=new Button("Add Ball");
    private AnchorPane root = new AnchorPane();
    private AnchorPane pane = new AnchorPane();
    private Body myCircle=null;
    
    private World world=null;
    private WorldView worldView=null;
    private final float worldScale=50f;
    private final float originX=4f, originY=8f;
    private final float radius=1f;

    @Override
    public void start(Stage primaryStage) {
        
        leapController.addListener(listener); 
        
        world = new World(new Vec2(0, 0f)); // No gravity
        // 200x400 -> world origin->(4f, 8f), Y axis>0 UP        
        worldView=new WorldView(world, originX*worldScale, originY*worldScale, worldScale);
        
        AnchorPane.setBottomAnchor(pane, 20d); AnchorPane.setTopAnchor(pane, 50d);
        AnchorPane.setLeftAnchor(pane, 20d);   AnchorPane.setRightAnchor(pane, 20d);
        // root: 800x600, pane: 760x530, worldScale= 50 -> world dimensions: 15.2f x 10.6f 
        pane.getChildren().setAll(worldView);        
                
        NodeManager.addProvider(new MyNodeProvider());
        
        button.setLayoutX(30); button.setLayoutY(15);
        button.setOnAction(new EventHandler<ActionEvent>(){
            @Override
            public void handle(ActionEvent t) {
                Body ball=new CircleShapeBuilder(world).userData("ball")
                            .position(0f, 4f)
                            .type(BodyType.DYNAMIC).restitution(1f).density(0.4f)
                            .radius(0.5f).friction(0f)
                            .build();
                ball.setLinearVelocity(new Vec2(4,2));
                ball.setLinearDamping(0f);
            }            
        });
        
        myCircle=new CircleShapeBuilder(world).userData("hand1").position(0f, 2f)
                .type(BodyType.STATIC).restitution(1f).density(1)
                .radius(radius).friction(0f)
                .build();
        new BoxBuilder(world).position(3.6f, 8f).restitution(1f).friction(0f)
                              .halfHeight(0.1f).halfWidth(7.7f).build();
        new BoxBuilder(world).position(3.6f, -2.6f).restitution(1f).friction(0f)
                              .halfHeight(0.1f).halfWidth(7.7f).build();
        new BoxBuilder(world).position(-4f, 2.7f).restitution(1f).friction(0f)
                              .halfHeight(5.4f).halfWidth(0.1f).build();
        new BoxBuilder(world).position(11.2f, 2.7f).restitution(1f).friction(0f)
                              .halfHeight(5.4f).halfWidth(0.1f).build();
        
        root.getChildren().addAll(button, pane);
        final Scene scene = new Scene(root, 800, 600);        
        
        listener.pointProperty().addListener(new ChangeListener<Point2D>(){
            @Override 
            public void changed(ObservableValue<? extends Point2D> ov, Point2D t, final Point2D t1) {
                Platform.runLater(new Runnable(){
                    @Override 
                    public void run() {
                        Point2D d=pane.sceneToLocal(t1.getX()-scene.getX()-scene.getWindow().getX()-root.getLayoutX(),
                                                    t1.getY()-scene.getY()-scene.getWindow().getY()-root.getLayoutY());
                        double dx=d.getX()/worldScale, dy=d.getY()/worldScale;
                        if(dx>=0.1 && dx<=pane.getWidth()/worldScale-2d*radius-0.1 && 
                           dy>=0.1 && dy<=pane.getHeight()/worldScale-2d*radius-0.1){
                            myCircle.setTransform(new Vec2((float)(dx)-(originX-radius),
                                                           (originY-radius)-(float)(dy)),
                                                  myCircle.getAngle());
                        }
                    }
                });
            }
        });
        listener.keyTapProperty().addListener(new ChangeListener<Boolean>(){
            @Override public void changed(ObservableValue<? extends Boolean> ov, Boolean t, final Boolean t1) {
                if(t1.booleanValue()){
                    Platform.runLater(new Runnable(){
                        @Override public void run() {
                            button.fire();
                        }
                    });
                }
            }
        });

        primaryStage.setTitle("PhysicsLeapJavaFX Sample");
        primaryStage.setScene(scene);
        primaryStage.show();
    }
    @Override
    public void stop(){
        leapController.removeListener(listener);
        
    }
}

I've added some gesture recognition to fire the button when a key tap gesture is performed. Besides, it's quite convenient to smooth the readings from the Leap by taking the average of the last positions instead of using the position from every single frame. So let's modify the SimpleLeapListener class, adding a size-limited LinkedList to store the last 30 positions, and also enabling key tap gestures:

public class SimpleLeapListener extends Listener {

    private ObjectProperty<Point2D> point=new SimpleObjectProperty<>();
    public ObservableValue<Point2D> pointProperty(){ return point; }
    private LimitQueue<Vector> positionAverage = new LimitQueue<Vector>(30);
    
    private BooleanProperty keyTap= new SimpleBooleanProperty(false);
    public BooleanProperty keyTapProperty() { return keyTap; }

    @Override
    public void onConnect(Controller controller) {
        // key tap gestures must be enabled explicitly, or frame.gestures() stays empty
        controller.enableGesture(Gesture.Type.TYPE_KEY_TAP);
    }

    @Override
    public void onFrame(Controller controller) {
        Frame frame = controller.frame();
        if (!frame.hands().empty()) {
            Screen screen = controller.calibratedScreens().get(0);
            if (screen != null && screen.isValid()){
                Hand hand = frame.hands().get(0);
                if(hand.isValid()){
                    Vector intersect = screen.intersect(hand.palmPosition(),hand.direction(), true);
                    positionAverage.add(intersect);
                    Vector avIntersect=Average(positionAverage);
                    point.setValue(new Point2D(screen.widthPixels()*Math.min(1d,Math.max(0d,avIntersect.getX())),
                            screen.heightPixels()*Math.min(1d,Math.max(0d,(1d-avIntersect.getY())))));
                }
            }
        }
        keyTap.set(false);
        GestureList gestures = frame.gestures();
        for (int i = 0; i < gestures.count(); i++) {
            if(gestures.get(i).type()==Gesture.Type.TYPE_KEY_TAP){
                keyTap.set(true); break;
            }
        }
    }
    
    private Vector Average(LimitQueue<Vector> vectors)
    {
        float vx=0f, vy=0f, vz=0f;
        for(Vector v:vectors){
            vx=vx+v.getX(); vy=vy+v.getY(); vz=vz+v.getZ();
        }
        return new Vector(vx/vectors.size(), vy/vectors.size(), vz/vectors.size());
    }
    
    private class LimitQueue<E> extends LinkedList<E> {
        private int limit;
        public LimitQueue(int limit) {
            this.limit = limit;
        }

        @Override
        public boolean add(E o) {
            super.add(o);
            while (size() > limit) { super.remove(); }
            return true;
        }
    }
}
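The LimitQueue and the averaging helper don't depend on the Leap SDK at all, so they can be checked in isolation. Here is a self-contained version operating on plain doubles (same logic as the listener above, repackaged into a standalone class with an illustrative name):

```java
import java.util.LinkedList;

public class SmoothingDemo {
    // size-limited FIFO: adding beyond the limit evicts the oldest elements
    static class LimitQueue<E> extends LinkedList<E> {
        private final int limit;
        public LimitQueue(int limit) { this.limit = limit; }
        @Override public boolean add(E o) {
            super.add(o);
            while (size() > limit) { super.remove(); }
            return true;
        }
    }

    // arithmetic mean of the queued values: the smoothed reading
    static double average(LimitQueue<Double> values) {
        double sum = 0d;
        for (Double d : values) { sum += d; }
        return sum / values.size();
    }

    public static void main(String[] args) {
        LimitQueue<Double> q = new LimitQueue<>(3);
        for (double v : new double[]{1, 2, 3, 4, 5}) { q.add(v); }
        System.out.println(q.size());    // 3: only the last three values remain
        System.out.println(average(q));  // 4.0: mean of 3, 4 and 5
    }
}
```

The moving average trades a little latency for stability: with 30 samples at the Leap's frame rate the pointer lags slightly but stops jittering.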

And finally, here you can see it in action:

 

5. POC #3. JavaFX 3D and Leap, with JDK8

The last part of this post covers my experiments with the recent early access releases of JDK8 (build b92 at the time of this writing; JavaFX 3D has been enabled since b77). Here you can read about the 3D features planned for JavaFX 8.

Installing JDK8 is easy, and so is creating a JavaFX scene with 3D primitives like spheres, boxes or cylinders, or even user-defined shapes built from meshes, each defined by a set of points, texture coordinates and faces.



There are no built-in loaders for existing 3D file formats (obj, stl, Maya, 3D Studio, ...), so if you want to import a 3D model, you'll need one.

The first place to look is OpenJFX, the open source home of JavaFX development.

Among the experiments, as they call them, in their repository, you'll find a 3D Viewer. So download it, build it and see what you can do!



For instance, you can drag and drop an obj model. The one in the picture is a model of a Raspberry Pi, downloaded from here.

For other formats not yet supported, you can go to InteractiveMesh.org, where August Lammersdorf has released several importers (3ds, obj and stl) for JDK8 b91+. Kudos to him for his amazing work and contributions!

I'll use his 3ds importer and the Hubble Space Telescope model from NASA to add this model to a JavaFX scene, and then I'll add touch-less rotation and scaling options.

First of all, we need a little mathematical background here, as rotating a 3D model in JavaFX requires a rotation axis and an angle. If we have several rotations to apply at the same time, we need to build a combined rotation matrix and then extract the rotation axis and angle from it.

As the Leap provides three rotations of a hand: pitch (around its X axis), yaw (around its Y axis) and roll (around its Z axis), and provided the model is already well oriented (otherwise we'd need to add previous rotations too), the combined rotation matrix will be:

$$R=\begin{pmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{pmatrix}$$

where, writing $\alpha$ for roll, $\beta$ for pitch and $\gamma$ for yaw:

$$\begin{aligned}
A_{11}&=\cos\alpha\cos\gamma, & A_{12}&=\cos\beta\sin\alpha+\cos\alpha\sin\beta\sin\gamma, & A_{13}&=\sin\alpha\sin\beta-\cos\alpha\cos\beta\sin\gamma,\\
A_{21}&=-\cos\gamma\sin\alpha, & A_{22}&=\cos\alpha\cos\beta-\sin\alpha\sin\beta\sin\gamma, & A_{23}&=\cos\alpha\sin\beta+\cos\beta\sin\alpha\sin\gamma,\\
A_{31}&=\sin\gamma, & A_{32}&=-\cos\gamma\sin\beta, & A_{33}&=\cos\beta\cos\gamma.
\end{aligned}$$

Then, the angle and the components of the unit rotation axis can be easily computed from the matrix:

$$\theta=\arccos\left(\frac{A_{11}+A_{22}+A_{33}-1}{2}\right),\qquad
(u_x,u_y,u_z)=\frac{1}{2\sin\theta}\,\bigl(A_{32}-A_{23},\;A_{13}-A_{31},\;A_{21}-A_{12}\bigr).$$

Special care has to be taken when converting the Leap roll, pitch and yaw angles to the values required for the JavaFX coordinate system (which is rotated 180º around the X axis).

With these equations, we just need to listen to hand rotation changes and, on every change, compute the rotation axis and angle to rotate the 3D model accordingly.
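The axis-angle extraction is pure math and can be checked independently of JavaFX or the Leap: feeding it the matrix of a pure rotation about one axis should return that axis and angle. A standalone sketch (the class and method names are illustrative):

```java
public class AxisAngleCheck {
    // Extract {angle, ux, uy, uz} from a 3x3 rotation matrix (row-major),
    // using the trace and antisymmetric-part formulas given above.
    // The axis is undefined when the angle is zero.
    public static double[] axisAngle(double[][] a) {
        double angle = Math.acos((a[0][0] + a[1][1] + a[2][2] - 1d) / 2d);
        if (angle == 0d) { return new double[]{0d, 0d, 0d, 0d}; }
        double den = 2d * Math.sin(angle);
        return new double[]{ angle,
                (a[2][1] - a[1][2]) / den,   // A32 - A23
                (a[0][2] - a[2][0]) / den,   // A13 - A31
                (a[1][0] - a[0][1]) / den }; // A21 - A12
    }

    public static void main(String[] args) {
        // a rotation of 90 degrees about the Z axis
        double t = Math.PI / 2;
        double[][] rz = {
            { Math.cos(t), -Math.sin(t), 0 },
            { Math.sin(t),  Math.cos(t), 0 },
            { 0,            0,           1 } };
        double[] r = axisAngle(rz);
        System.out.println(Math.toDegrees(r[0]));            // approximately 90
        System.out.println(r[1] + " " + r[2] + " " + r[3]);  // approximately 0 0 1
    }
}
```

This is exactly what the matrixRotateNode method in the next snippet does, after first building the matrix entries from the hand's roll, pitch and yaw.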

So now we're ready to try our 3D POC sample: importing a 3ds model and rotating it with our hand through the Leap Motion Controller.
 
The following code snippet shows how it is done for the JavaFX class:

public class JavaFX8 extends Application {
    private AnchorPane root=new AnchorPane();
    private final Rotate cameraXRotate = new Rotate(0,0,0,0,Rotate.X_AXIS);
    private final Rotate cameraYRotate = new Rotate(0,0,0,0,Rotate.Y_AXIS);
    private final Translate cameraPosition = new Translate(-300,-550,-700);
    private SimpleLeapListener listener = new SimpleLeapListener();
    private Controller leapController = new Controller();
    
    @Override 
    public void start(Stage stage){
        final Scene scene = new Scene(root, 1024, 800, true);
        final Camera camera = new PerspectiveCamera();
        camera.getTransforms().addAll(cameraXRotate,cameraYRotate,cameraPosition);
        scene.setCamera(camera);
        leapController.addListener(listener);

        TdsModelImporter model=new TdsModelImporter();
        try {
            URL hubbleUrl = this.getClass().getResource("hst.3ds");
            model.read(hubbleUrl);
        }
        catch (ImportException e) {
            System.out.println("Error importing 3ds model: "+e.getMessage());
            return;
        }
        final Node[] hubbleMesh = model.getImport();
        model.close();
        final Group model3D = new Group(hubbleMesh);
 
        final PointLight pointLight = new PointLight(Color.ANTIQUEWHITE);
        pointLight.setTranslateX(800);
        pointLight.setTranslateY(-800);
        pointLight.setTranslateZ(-1000);
        root.getChildren().addAll(model3D,pointLight);

        listener.posHandLeftProperty().addListener(new ChangeListener<Point3D>(){
            @Override public void changed(ObservableValue<? extends Point3D> ov, Point3D t, final Point3D t1) {
                Platform.runLater(new Runnable(){
                    @Override public void run() {
                        if(t1!=null){
                            double roll=listener.rollLeftProperty().get();
                            double pitch=-listener.pitchLeftProperty().get();
                            double yaw=-listener.yawLeftProperty().get();
                            matrixRotateNode(model3D,roll,pitch,yaw);
                        }
                    }
                });
            }
        });
        stage.setScene(scene);
        stage.show();
    }
    private void matrixRotateNode(Node n, double alf, double bet, double gam){
        double A11=Math.cos(alf)*Math.cos(gam);
        double A12=Math.cos(bet)*Math.sin(alf)+Math.cos(alf)*Math.sin(bet)*Math.sin(gam);
        double A13=Math.sin(alf)*Math.sin(bet)-Math.cos(alf)*Math.cos(bet)*Math.sin(gam);
        double A21=-Math.cos(gam)*Math.sin(alf);
        double A22=Math.cos(alf)*Math.cos(bet)-Math.sin(alf)*Math.sin(bet)*Math.sin(gam);
        double A23=Math.cos(alf)*Math.sin(bet)+Math.cos(bet)*Math.sin(alf)*Math.sin(gam);
        double A31=Math.sin(gam);
        double A32=-Math.cos(gam)*Math.sin(bet);
        double A33=Math.cos(bet)*Math.cos(gam);
        
        double d = Math.acos((A11+A22+A33-1d)/2d);
        if(d!=0d){
            double den=2d*Math.sin(d);
            Point3D p= new Point3D((A32-A23)/den,(A13-A31)/den,(A21-A12)/den);
            n.setRotationAxis(p);
            n.setRotate(Math.toDegrees(d));                    
        }
    }
}

And this code snippet shows how it is done for the Leap Listener class:

public class SimpleLeapListener extends Listener {
    private ObjectProperty<Point3D> posHandLeft=new SimpleObjectProperty<Point3D>();
    private DoubleProperty pitchLeft=new SimpleDoubleProperty(0d);
    private DoubleProperty rollLeft=new SimpleDoubleProperty(0d);
    private DoubleProperty yawLeft=new SimpleDoubleProperty(0d);
    private LimitQueue<Vector> posLeftAverage = new LimitQueue<Vector>(30);
    private LimitQueue<Double> pitchLeftAverage = new LimitQueue<Double>(30);
    private LimitQueue<Double> rollLeftAverage = new LimitQueue<Double>(30);
    private LimitQueue<Double> yawLeftAverage = new LimitQueue<Double>(30);

    public ObservableValue<Point3D> posHandLeftProperty(){ return posHandLeft; }
    public DoubleProperty yawLeftProperty(){ return yawLeft; }
    public DoubleProperty pitchLeftProperty(){ return pitchLeft; }
    public DoubleProperty rollLeftProperty(){ return rollLeft; }
    
    @Override
    public void onFrame(Controller controller) {
        Frame frame = controller.frame();
        if (!frame.hands().empty()) {
            Screen screen = controller.calibratedScreens().get(0);
            if (screen != null && screen.isValid()){
                Hand hand = frame.hands().get(0);
                if(hand.isValid()){
                    pitchLeftAverage.add(new Double(hand.direction().pitch()));
                    rollLeftAverage.add(new Double(hand.palmNormal().roll()));
                    yawLeftAverage.add(new Double(hand.direction().yaw()));                    
                    pitchLeft.set(dAverage(pitchLeftAverage).doubleValue());
                    rollLeft.set(dAverage(rollLeftAverage).doubleValue());
                    yawLeft.set(dAverage(yawLeftAverage).doubleValue());
                    
                    Vector intersect = screen.intersect(hand.palmPosition(),hand.direction(), true);
                    posLeftAverage.add(intersect);
                    Vector avIntersect=Average(posLeftAverage);
                    posHandLeft.setValue(new Point3D(screen.widthPixels()*Math.min(1d,Math.max(0d,avIntersect.getX())),
                            screen.heightPixels()*Math.min(1d,Math.max(0d,(1d-avIntersect.getY()))),
                            hand.palmPosition().getZ()));
                }
            }                
        }
    }
    private Double dAverage(LimitQueue<Double> vectors){
        double vx=0;
        for(Double d:vectors){
            vx=vx+d.doubleValue();
        }
        return new Double(vx/vectors.size());
    }
}

In the following video I've added a few extra things not shown in the code above: the hand's Z position scales the model, and a circle gesture of the right hand starts an animation that rotates the model indefinitely until another circle gesture is detected, resuming hand-driven rotations.



Conclusions

With these few samples, I think I've shown the impressive potential of a device like the Leap Motion Controller.

JavaFX as a RIA platform can interact nicely with the Leap Motion device and take the UI to the next level.

We're all eagerly awaiting the public release of the device and the opening of the Airspace market, where we'll find all kinds of applications to use with the Leap.

It will definitely change the way we interact with our computers forever.

Thank you for reading, as always, any comment will be absolutely welcome.