Thursday, January 8, 2015

Creating and Texturing JavaFX 3D Shapes

Hi there!

It's been a while since my last post, and it seems I said the same thing in the previous one... but you know, a lot of stuff happened in between, and this blog has a tradition of long posts, the kind that can't be delivered on a weekly or monthly basis. If you're a regular reader (thank you!), you know how this goes.

This post is about my latest developments in JavaFX 3D, after working in close collaboration with a bunch of incredible guys over the last few months.

For those of you new to this blog, I've already written a few posts about JavaFX 3D. The most recent one was about the Rubik's Cube: RubikFX: Solving the Rubik's Cube with JavaFX 3D, and another was about the Leap Motion controller: Leap Motion Controller and JavaFX: A new touch-less approach.

I'll cover the following topics in this post: the Leap Motion skeletal tracking model, skinning meshes, creating new 3D shapes, and playing with textures.

Before getting started, did I mention my article "Building castles in the Sky. Use JavaFX 3D to model historical treasures and more" has been published in the current issue of Java Magazine?

In a nutshell, the article describes a multi-model JavaFX 3D application, developed for virtual immersion in cultural heritage buildings, using models created by reverse engineering with photogrammetry techniques. The 3D model of the Menéndez Pelayo Library in Santander, Spain, is used throughout the article as an example of a complex model.

You can find this application and, thanks to Óscar Cosido, a free model of the Library here.

Leap Motion Skeletal Tracking Model

Since my first post about Leap Motion, I've improved the 3D version, after Leap Motion released version 2 of their API, which includes a skeletal tracking model.

I haven't had the chance to blog about it, but this early video shows my initial work. You can see that the model now includes bones, so a more realistic hand can be built.

I demoed a more advanced version at one of my JavaOne talks with the incredible James Weaver, Sean Phillips and Zoran Sevarac. Sadly, Jason Pollastrini couldn't make it, but he was part of the 3D team.


If you are interested, all the code is available here. Go fork it and play with it if you have a Leap Motion controller.

 Yes, we did have a great time there.

The session was great. In fact you can watch it now at Parleys.

We even had a special guest: John Yoon, a.k.a. @JavaFX3D.

Skinning Meshes and Leap Motion

And then I met Alexander Kouznetsov.

It was during the Hackergarten 3D session, where Sven Reimers and I were hacking on some JavaFX 3D stuff, when he showed up, laptop in backpack, ready for some hacking. There's no better way to get something done than telling a real developer: I bet you can't hack this...

So the challenge was importing a rigged hand in JSON format to use a SkinningMesh in combination with the new Leap Motion skeletal tracking model. As the one and only John Yoon would show later in his talk:
"In order to animate a 3D model, you need a transform hierarchy to which the 3D geometry is attached. 
The general term for this part of the pipeline is “rigging” or “character setup”.
Rigging is the process of setting up your static 3D model for computer-generated animation, to make it animatable."
He was in charge of animating Duke for the chess demo shown at the JavaOne 2013 keynote. As shown in the picture above, this requires a mesh, a list of joints, weights and transformations, binding the inner 'bones' to the surrounding external mesh, so that when the former are moved, the latter is deformed, creating the desired animation effect.
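
To make the idea of a transform hierarchy concrete, here is a minimal plain-Java sketch (hypothetical, not from the 3DViewer code): each joint stores a translation relative to its parent, and its global position is the accumulation along the chain. This is exactly why the bind transforms we build later subtract the parent's local-to-scene translation.

```java
// Hypothetical sketch of a transform hierarchy: parent[i] is the index of
// joint i's parent (-1 for the root), localOffsets[i] its translation
// relative to that parent. The global position accumulates up the chain.
class JointChain {
    static double[] globalPosition(double[][] localOffsets, int[] parent, int joint) {
        double[] pos = new double[3];
        for (int j = joint; j != -1; j = parent[j]) {
            pos[0] += localOffsets[j][0];
            pos[1] += localOffsets[j][1];
            pos[2] += localOffsets[j][2];
        }
        return pos;
    }
}
```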

The SkinningMesh class in the 3DViewer project was initially designed for Maya, while we had a rigged hand as a Three.js model in JSON format.

So, out of the blue, Alex built an importer and managed to extract the mesh of the hand by reverse engineering the format. Right after that he worked out the rest of the components of the SkinningMesh. The most important part was the binding of the transformations between joints.

        Affine[] bindTransforms = new Affine[nJoints];
        Affine bindGlobalTransform = new Affine();
        List<Joint> joints = new ArrayList<>(nJoints);
        List<Parent> jointForest = new ArrayList<>();
        for (int i = 0; i < nJoints; i++) {
            JsonObject bone = object.getJsonArray("bones").getJsonObject(i);
            Joint joint = new Joint();
            String name = bone.getString("name");
            JsonArray pos = bone.getJsonArray("pos");
            double x = pos.getJsonNumber(0).doubleValue();
            double y = pos.getJsonNumber(1).doubleValue();
            double z = pos.getJsonNumber(2).doubleValue();
            int parentIndex = bone.getInt("parent");
            if (parentIndex == -1) {
                // root joint: its bind transform is just the inverse translation
                bindTransforms[i] = new Affine(new Translate(-x, -y, -z));
                jointForest.add(joint);
            } else {
                // child joint: subtract the parent's accumulated translation
                Joint parent = joints.get(parentIndex);
                bindTransforms[i] = new Affine(new Translate(
                        -x - parent.getLocalToSceneTransform().getTx(), 
                        -y - parent.getLocalToSceneTransform().getTy(), 
                        -z - parent.getLocalToSceneTransform().getTz()));
                parent.getChildren().add(joint);
            }
            joint.getChildren().add(new Axes(0.02));
            joints.add(joint);
        }

This was the first animation with the model:


The axes are shown at every joint. Observe how easy it is to deform a complex mesh just by rotating two joints:

        Timeline t = new Timeline(new KeyFrame(Duration.seconds(1), 
                new KeyValue(joints.get(5).rx.angleProperty(), 90),
                new KeyValue(joints.get(6).rx.angleProperty(), 90)));
        t.play();

With a working SkinningMesh, it was just time for adding the skeletal tracking model from Leap Motion. 

First we needed to match Leap Motion Bones to SkinningMesh joints, and then apply the actual orientation of every bone to the corresponding joint transformation.

        listener = new LeapListener();
        List<Finger> fingersLeft = listener.getFingersLeft();
        fingersLeft.stream()
                .filter(Finger::isValid)
                .forEach(finger -> {
                    // iterate over the bones of each finger
                    Stream.of(Bone.Type.values())
                            .map(finger::bone)
                            .filter(bone -> bone.isValid() && bone.length() > 0)
                            .forEach(bone -> {
                                Joint joint = getJoint(false, finger, bone);
                                // rotate the joint from the previous bone's
                                // direction to this bone's direction
                                Vector cross = bone.direction().cross(previousBone.direction());
                                double angle = bone.direction().angleTo(previousBone.direction());
                                joint.rx.setAxis(new Point3D(cross.getX(), -cross.getY(), cross.getZ()));
                                joint.rx.setAngle(Math.toDegrees(angle));
                            });
                });

The work was almost done! Back from JavaOne I had the time to finish the model, adding hand movements and drawing the joints:

This video sums up most of what we've accomplished:

If you are interested in this project, all the code is here. Feel free to clone or fork it. Pull requests are very welcome.


One thing leads to another... and Johan Vos and Sven asked me to join them in a project to create a Tweet Wall with JavaFX 3D for Devoxx 2014. JavaFX 3D? I couldn't say no, even though I wasn't attending!

Our first proposal (not the one Sven finally delivered) was based on the F(X)yz library from Sean and Jason: a SkyBox as a container with several tori inside, and tweets rotating over them:

Needless to say, we used the great Twitter4J API for retrieving new tweets with the hashtag #Devoxx.

The first challenge was figuring out how to render the tweets over each torus. The solution was based on taking a snapshot of the tweet (rendered in a background scene) and using it as the diffuse map image of the PhongMaterial assigned to the torus.

The second was creating a banner effect by rotating the tweets over the tori. To avoid artifacts, a segmented torus was built on top of the first one, cropping the faces of a regular torus, so that the resulting mesh could be textured with the image.

This is our desired segmented torus. 

In the next section, we'll go into details of how we could accomplish this shape.

Creating new 3D shapes

Note to beginners: for an excellent introduction to JavaFX 3D, have a look at the 3D chapters in these books: JavaFX 8: Introduction by Example and Pro JavaFX 8: A Definitive Guide to Building Desktop, Mobile, and Embedded Java Clients.

To create this mesh in JavaFX 3D we use a TriangleMesh as the basis for our shape, where we need to provide float arrays of vertices and texture coordinates, and one int array of vertex and texture indices defining every triangle face.

Since a torus can be constructed from a rectangle, by gluing both pairs of opposite edges together with no twists, we can use a 2D rectangular grid in local coordinates ($\theta$,$\phi$), and map every point with these equations:

\[X=(R+r \cos\phi) \cos\theta\\Z=(R+r \cos\phi) \sin\theta\\Y=r \sin\phi\]
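
In plain Java, that mapping can be sketched as follows (a standalone helper for illustration; the names are mine, not from F(X)yz):

```java
// Map grid cell (x, y) of a subDivX-by-subDivY grid onto a torus with
// major radius R and minor radius r; theta runs around the main ring,
// phi around the tube section, both spanning (-PI, PI].
class TorusMapping {
    static float[] torusPoint(int x, int y, int subDivX, int subDivY, float R, float r) {
        double theta = (-1d + 2d * x / subDivX) * Math.PI;
        double phi   = (-1d + 2d * y / subDivY) * Math.PI;
        float pX = (float) ((R + r * Math.cos(phi)) * Math.cos(theta));
        float pZ = (float) ((R + r * Math.cos(phi)) * Math.sin(theta));
        float pY = (float) (r * Math.sin(phi));
        return new float[]{pX, pY, pZ};
    }
}
```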

So based on this grid (with colored borders and triangles for clarity):

we could create this torus (observe how the four corners of the rectangle are joined together in a single vertex):

Now, if we want to segment the mesh, we can get rid of a few elements at the borders. From the red inner grid we get a segmented torus:

Vertices coordinates

As we can see in the SegmentedTorusMesh class from the F(X)yz library, generating the vertices for the mesh is really easy, based on the above equations, the desired number of subdivisions (20 and 16 in the figures) and the number of elements cropped in both directions (4):

    private TriangleMesh createTorus(int subDivX, int subDivY, int crop, float R, float r){    
        TriangleMesh triangleMesh = new TriangleMesh();

        // Create points
        List<Point3D> listVertices = new ArrayList<>();
        float pointX, pointY, pointZ;
        for (int y = crop; y <= subDivY-crop; y++) {
            float dy = (float) y / subDivY;
            for (int x = crop; x <= subDivX-crop; x++) {
                float dx = (float) x / subDivX;
                if (crop > 0 || (crop == 0 && x < subDivX && y < subDivY)) {
                    pointX = (float) ((R+r*Math.cos((-1d+2d*dy)*Math.PI))*Math.cos((-1d+2d*dx)*Math.PI));
                    pointZ = (float) ((R+r*Math.cos((-1d+2d*dy)*Math.PI))*Math.sin((-1d+2d*dx)*Math.PI));
                    pointY = (float) (r*Math.sin((-1d+2d*dy)*Math.PI));
                    listVertices.add(new Point3D(pointX, pointY, pointZ));
                }
            }
        }

Note that we have to convert this collection into a float array. Since there is no such thing as a FloatStream in Java 8, I asked a question on StackOverflow, and as a result we now use a very handy FloatCollector to do the conversion:

        float[] points = listVertices.stream()
                .flatMapToDouble(p -> DoubleStream.of(p.x, p.y, p.z))
                .collect(() -> new FloatCollector(listVertices.size() * 3),
                         FloatCollector::add, FloatCollector::join)
                .toArray();
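
For reference, a simplified FloatCollector along the lines of the one from that StackOverflow answer might look like this (a sketch, not the exact F(X)yz implementation):

```java
import java.util.Arrays;

// Mutable container for Stream.collect(supplier, accumulator, combiner):
// gathers double values into a growing float[].
class FloatCollector {
    private float[] curr;
    private int size;

    FloatCollector(int initialSize) {
        curr = new float[Math.max(initialSize, 16)];
    }
    void add(double d) {
        if (curr.length == size) {
            curr = Arrays.copyOf(curr, size * 2); // grow when full
        }
        curr[size++] = (float) d;
    }
    void join(FloatCollector other) {
        if (size + other.size > curr.length) {
            curr = Arrays.copyOf(curr, size + other.size);
        }
        System.arraycopy(other.curr, 0, curr, size, other.size);
        size += other.size;
    }
    float[] toArray() {
        return (size == curr.length) ? curr : Arrays.copyOf(curr, size);
    }
}
```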


In case anybody is wondering why we don't use plain float[]: using collections instead of simple float arrays allows us to perform mesh coloring (as we'll see later), subdivisions, ray tracing, ... using streams and, in many of these cases, parallel streams.

Well, in Jason's words: why doesn't TriangleMesh provide a format that incorporates the use of streams by default...?

Texture coordinates

In the same way, we can create the texture coordinates. We use the same grid, but now mapping (u,v) coordinates, from (0.0,0.0) at the top-left corner to (1.0,1.0) at the bottom-right one.

We need extra points for the borders.

        int index = 0;
        int width = subDivX - 2*crop;
        int height = subDivY - 2*crop;
        float[] textureCoords = new float[(width+1)*(height+1)*2];
        for (int v = 0; v <= height; v++) {
            float dv = (float) v / (float) height;
            for (int u = 0; u <= width; u++) {
                textureCoords[index] = (float) u / (float) width;
                textureCoords[index + 1] = dv;
                index += 2;
            }
        }


Once we have defined the coordinates we need to create the faces. From JavaDoc:
The term face is used to indicate 3 set of interleaving points and texture coordinates that together represent the geometric topology of a single triangle.
One face is defined by 6 indices: p0, t0, p1, t1, p2, t2, where p0, p1 and p2 are indices into the points array, and t0, t1 and t2 are indices into the texture coordinates array.
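
As an illustration of that layout, here is a tiny standalone helper (hypothetical, plain Java) that interleaves triangles of point indices with triangles of texture indices into the 6-int face format:

```java
// Build a TriangleMesh-style face array from parallel lists of point-index
// triangles and texture-index triangles: p0,t0, p1,t1, p2,t2 per face.
class FaceInterleaver {
    static int[] interleave(int[][] pointFaces, int[][] texFaces) {
        int[] faces = new int[pointFaces.length * 6];
        for (int i = 0; i < pointFaces.length; i++) {
            for (int j = 0; j < 3; j++) {
                faces[6 * i + 2 * j]     = pointFaces[i][j]; // point index
                faces[6 * i + 2 * j + 1] = texFaces[i][j];   // texture index
            }
        }
        return faces;
    }
}
```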

For convenience, we'll use two split collections: one for point indices and one for texture indices.

Based on the figures above, we go triangle by triangle, selecting the three indices of each one in a specific order. This order is critical for the surface orientation. Also note that for vertices we reuse indices at the borders to avoid the formation of seams.

        List<Point3D> listFaces = new ArrayList<>();
        // Create vertex indices
        for (int y = crop; y < subDivY-crop; y++) {
            for (int x = crop; x < subDivX-crop; x++) {
                int p00 = (y-crop)*((crop>0)?numDivX:numDivX-1) + (x-crop);
                int p01 = p00 + 1;
                if (crop == 0 && x == subDivX-1) {
                    p01 -= subDivX; // wrap around to the first column
                }
                int p10 = p00 + ((crop>0)?numDivX:numDivX-1);
                if (crop == 0 && y == subDivY-1) {
                    p10 -= subDivY*(numDivX-1); // wrap around to the first row
                }
                int p11 = p10 + 1;
                if (crop == 0 && x == subDivX-1) {
                    p11 -= subDivX;
                }
                listFaces.add(new Point3D(p00,p10,p11));                
                listFaces.add(new Point3D(p11,p01,p00));
            }
        }

        List<Point3D> listTextures = new ArrayList<>();
        // Create texture indices
        for (int y = crop; y < subDivY-crop; y++) {
            for (int x = crop; x < subDivX-crop; x++) {
                int p00 = (y-crop) * numDivX + (x-crop);
                int p01 = p00 + 1;
                int p10 = p00 + numDivX;
                int p11 = p10 + 1;
                listTextures.add(new Point3D(p00,p10,p11));                
                listTextures.add(new Point3D(p11,p01,p00));
            }
        }
Now we have to join them. The advantages of this approach will be shown later.

        // Create faces, interleaving vertex and texture indices
        AtomicInteger count = new AtomicInteger();
        int[] faces = listFaces.stream()
                .flatMapToInt(f -> {
                    Point3D t = listTextures.get(count.getAndIncrement());
                    int p0 = (int) f.x, p1 = (int) f.y, p2 = (int) f.z;
                    int t0 = (int) t.x, t1 = (int) t.y, t2 = (int) t.z;
                    return IntStream.of(p0, t0, p1, t1, p2, t2);
                })
                .toArray();
        // set the arrays built above on the mesh
        triangleMesh.getTexCoords().setAll(textureCoords);
        triangleMesh.getFaces().setAll(faces);
        // finally, return the mesh
        return triangleMesh;

This picture shows how we create the first and last pairs of faces. Note the use of counter-clockwise winding to define the front faces, so the normal of every surface points outwards (towards the outside of the screen).
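
The winding rule can be checked numerically: for a triangle whose vertices appear counter-clockwise from the front, the cross product of its edges points out of the surface. A small sketch (hypothetical helper, plain Java):

```java
// Normal of triangle v0-v1-v2 as the cross product (v1-v0) x (v2-v0);
// with counter-clockwise winding (seen from the front) it points
// out of the surface, towards the viewer.
class FaceNormal {
    static double[] normal(double[] v0, double[] v1, double[] v2) {
        double ax = v1[0] - v0[0], ay = v1[1] - v0[1], az = v1[2] - v0[2];
        double bx = v2[0] - v0[0], by = v2[1] - v0[1], bz = v2[2] - v0[2];
        return new double[]{ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx};
    }
}
```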

Finally, we can create our banner effect by adding two tori, both solid (DrawMode.FILL), one of them segmented and textured with an image. This snippet shows the basics:

        SegmentedTorusMesh torus = new SegmentedTorusMesh(50, 42, 0, 500d, 300d); 
        PhongMaterial matTorus = new PhongMaterial(Color.FIREBRICK);
        torus.setMaterial(matTorus);
        SegmentedTorusMesh banner = new SegmentedTorusMesh(50, 42, 14, 500d, 300d); 
        PhongMaterial matBanner = new PhongMaterial();
        matBanner.setDiffuseMap(new Image(getClass().getResource("res/Duke3DprogressionSmall.jpg").toExternalForm()));
        banner.setMaterial(matBanner);
        Rotate rotateY = new Rotate(0, 0, 0, 0, Rotate.Y_AXIS);
        torus.getTransforms().addAll(new Rotate(0, Rotate.X_AXIS), rotateY);
        banner.getTransforms().addAll(new Rotate(0, Rotate.X_AXIS), rotateY);
        Group group = new Group();
        group.getChildren().addAll(torus, banner);        
        Group sceneRoot = new Group(group);
        Scene scene = new Scene(sceneRoot, 600, 400, true, SceneAntialiasing.BALANCED);
        primaryStage.setTitle("F(X)yz - Segmented Torus");

        final Timeline bannerEffect = new Timeline();
        bannerEffect.setCycleCount(Timeline.INDEFINITE);
        final KeyValue kv1 = new KeyValue(rotateY.angleProperty(), 360);
        final KeyFrame kf1 = new KeyFrame(Duration.millis(10000), kv1);
        bannerEffect.getKeyFrames().add(kf1);
        bannerEffect.play();

to get this animation working:


Playing with textures

The last section of this long post will show you how we can hack the textures of a TriangleMesh to display more advanced images over a 3D shape. This will include:
  • Coloring meshes (vertices or faces) 
  • Creating contour plots
  • Using patterns
  • Animating textures
This work was inspired by a question from Álvaro Álvarez on StackOverflow, about coloring the individual triangles or individual vertices of a mesh. The immediate answer would be: no, you can't easily, since for one mesh there is one material with one diffuse color, and it's not possible to assign different materials to different triangles of the same mesh. You could create as many meshes and materials as colors, if that number were really small.

Using textures was the only way, but following the standard procedure you would need to color your texture image precisely, matching each triangle with its color.

Every convex polyhedron has at least one net: a 2D arrangement of polygons that can be folded into the faces of the 3D shape. Based on an icosahedron (20 faces), we could use its net to color every face:

And then use the image as texture for the 3D shape:

This was my first answer, but then I started thinking about another approach. What if, instead of the colored net above, we created at runtime a small image of colored rectangles, like this one:

and tricked the texture coordinates and texture indices into finding their values in this image instead? Done! The result was this neater picture:

(The required code is in my answer, so I won't post it here.)

Going a little bit further: if we could create a palette image with one color per pixel, we could also assign one color to each vertex, and the texture for the rest of the triangle would be interpolated by the scene graph! This was part of a second answer:

Color Palette

With this small class we can create small images with up to 1530 unique colors. The most important thing is that they are consecutive, so we'll have smooth contour plots, without unwanted bumps when intermediate values are interpolated.

To generate this 40x40 image (2 KB) at runtime, we just use this short snippet:

        WritableImage imgPalette = new WritableImage(40, 40);
        PixelWriter pw = imgPalette.getPixelWriter();
        AtomicInteger count = new AtomicInteger();
        IntStream.range(0, 40).boxed()
                .forEach(y -> IntStream.range(0, 40).boxed()
                        .forEach(x -> pw.setColor(x, y, 
                                // note the double division: an integer division
                                // by 1600 would truncate every hue to 0
                                Color.hsb(count.getAndIncrement() / 1600d * 360d, 1, 1))));

With it, we can retrieve the texture coordinates of a given point from this image, and update the texture coordinates of the mesh:

    public DoubleStream getTextureLocation(int iPoint){
        int y = iPoint / 40; 
        int x = iPoint - 40*y;
        return DoubleStream.of(x / 40d, y / 40d);
    }

    public float[] getTexturePaletteArray(){
        return IntStream.range(0, colors).boxed()
            .flatMapToDouble(this::getTextureLocation)
            .collect(() -> new FloatCollector(2*colors), FloatCollector::add, FloatCollector::join)
            .toArray();
    }


Density Maps

Half of the work is done. The other half consists in assigning a color to every vertex or face in our mesh, based on some criteria: a mathematical function that, for any $(x,y,z)$ coordinates, gives a value $f(x,y,z)$ that can be scaled within our range of colors.

So let's have a function:

    public interface DensityFunction<T> {
        Double eval(T p);
    }

    private DensityFunction<Point3D> density;

Let's find the extreme values by evaluating the function at all the vertices, using parallel streams:

    private double min, max;

    public void updateExtremes(List<Point3D> points){
        max = points.parallelStream().mapToDouble(p -> density.eval(p)).max().orElse(1d);
        min = points.parallelStream().mapToDouble(p -> density.eval(p)).min().orElse(0d);
    }

Finally, we assign a color to every vertex of every face, mapping each vertex to a texture index in the palette:
    public int mapDensity(Point3D p){
        int f = (int) ((density.eval(p) - min) / (max - min) * colors);
        return Math.min(Math.max(f, 0), colors - 1); // clamp to the palette range
    }

    public int[] updateFacesWithDensityMap(List<Point3D> points, List<Point3D> faces){
        return faces.parallelStream()
                .flatMapToInt(f -> {
                    int p0 = (int) f.x, p1 = (int) f.y, p2 = (int) f.z;
                    int t0 = mapDensity(points.get(p0));
                    int t1 = mapDensity(points.get(p1));
                    int t2 = mapDensity(points.get(p2));
                    return IntStream.of(p0, t0, p1, t1, p2, t2);
                })
                .toArray();
    }

    mesh.getFaces().setAll(updateFacesWithDensityMap(listVertices, listFaces));

Did I say I love Java 8??? You can see now how the strategy of using lists for vertices, textures and faces has clear advantages over plain float arrays.

Let's run an example, using the IcosahedronMesh class from F(X)yz:
    IcosahedronMesh ico = new IcosahedronMesh(5,1f);
    Scene scene = new Scene(new Group(ico), 600, 600, true, SceneAntialiasing.BALANCED);

This is the result:

Impressive, right? After a long explanation, we can happily say: yes! we can color every single triangle or vertex on the mesh!

And we could even move the colors, creating a smooth animation. For this we only need to update the faces array (vertices and texture coordinates stay the same). This video shows one:
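
The idea can be sketched in plain Java (a hypothetical helper, independent of the F(X)yz code): every odd slot of the face array is a texture index, so shifting those values through the palette animates the colors while the points and texture coordinates stay untouched.

```java
// Shift the texture indices (odd positions) of a TriangleMesh-style face
// array through a palette of the given size; point indices (even
// positions) are left as they are.
class PaletteShift {
    static int[] shiftTextureIndices(int[] faces, int shift, int colors) {
        int[] out = faces.clone();
        for (int i = 1; i < out.length; i += 2) {
            out[i] = (out[i] + shift) % colors;
        }
        return out;
    }
}
```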

More features

More? In this post? No! I won't extend it any more. I'll just post this picture:

And refer you to all these available 3D shapes, and more, at the F(X)yz repository. If I have the time, I'll try to post about them in a second part.


The JavaFX 3D API, in combination with the new Java 8 features, has proven really powerful for rendering complex meshes. The API can easily be extended to create libraries or frameworks that help developers whenever 3D features are required.

We are still far from others (Unity 3D, Three.js, ... to name a few), but with the collaboration of the great JavaFX community we can shorten the gap.

Please clone the repository, test it, create pull requests, issues, feature requests... get in touch with us, and help us keep this project alive and growing.

Also, visit StackOverflow and ask questions there using these tags: javafx, javafx-8 and the new javafx-3d. You never know where a good question may take you! And the answers will help other developers too.

A final word to give a proper shout-out to Sean Phillips and Jason Pollastrini, founders of the F(X)yz library, for starting an outstanding project.


  1. Hey Jose,
    Your blog has helped me to a great extent. However, I am stuck with a problem. I have a stage with a 3D object imported using the 3DS importer, and a rigged hand controlled by the Leap Motion. I want to detect collisions between the hand and the object (when and where on the hand), but I am not able to figure out how to get the boundary of a 3DS-imported object. Any help or suggestion will do wonders.
    Thank you

    1. Thanks, Riddhish. Detecting collisions in 3D is quite a complex task... In the skeletal model project mentioned above, I use intersects to detect if one sphere of the hand touches the big sphere. I use the bounding box of the model to check intersections, and every node on the scene has its bounds, including the model you have imported.

      Since you have a triangle mesh for your hand, the next step makes use of ray tracing techniques. As a starting point, check this question at StackOverflow. You'll also find a full sample here:

  2. Thanks for the discussion in your blog. The torus note was superb and I learnt a lot. I have a small issue. Suppose I have a screen with TriangleMesh-based geometry. Now I wish to click on the geometry and get its coordinates. I may use something like:
    final double xx;
    final double yy;
    geoScene.setOnMouseClicked (new EventHandler () {
    public void handle(MouseEvent event) {
    xx = event.getX ();
    yy = event.getY ();
    System.out.println(xx + " " + yy);

    but I am unable to access xx and yy after the event handler runs. It would be helpful if you could kindly post how to access variables from inner methods like these.

  3. Thanks for your kind words. If you have a look at the Rubik's Cube post, you'll find how you can work with PickResult.


