
Creating an Interactive Touch Object, Using CML Constructors and GML Manipulations, Part 1

Adding A GML-Defined Gesture,
Using Method 3: CML Constructors & GML Manipulations

In GestureWorks 3 we designed Creative Markup Language (CML) to simplify the development of multitouch applications by providing advanced methods in Flash for creating interactive objects and containers that can be manipulated using configurable gestures. Each application created with GestureWorks 3 has two associated XML documents: "my_application.cml" and "my_gestures.gml", located in the folders "bin/library/cml" and "bin/library/gml" respectively.

As part of the CML toolkit in GestureWorks 3 there are multiple built-in components that can be accessed using "my_application.cml". In this example an ImageElement component from the ComponentKit is used to dynamically load an image into a touch object ("touchContainer"), set its properties and place it on stage. For example:

<CanvasKit>
    <ComponentKit>
        <TouchContainer id="touchContainer" x="300" y="300" rotation="-45" dimensionsTo="image">
            <ImageElement id="image" src="library/assets/blimp0.jpg"/>
            <GestureList>
            </GestureList>
        </TouchContainer>
    </ComponentKit>
</CanvasKit>

To attach a gesture to a touch object defined in the CML document "my_application.cml", simply add a gesture between the "GestureList" tags associated with the touchContainer. For example:

<CanvasKit>
    <ComponentKit>
        <TouchContainer id="touchContainer" x="300" y="300" rotation="-45" dimensionsTo="image">
            <ImageElement id="image" src="library/assets/blimp0.jpg"/>
            <GestureList>
                <Gesture ref="n-drag" gestureOn="true"/>
            </GestureList>
        </TouchContainer>
    </ComponentKit>
</CanvasKit>

This adds the gesture "n-drag" to the touch object ("touchContainer") and effectively activates gesture analysis and processing. Any touch point placed on the touch object is added to the local cluster. The touch object inspects touch point clusters for a matching gesture "action" and then calculates cluster motion in the x and y directions. The result is then processed and prepared for mapping.

The traditional event model in Flash employs the explicit use of event listeners and handlers to manage gesture events on a touch object. In GestureWorks 3, however, Gesture Markup Language can be used to directly control how gesture events map to touch object properties and therefore how touch objects are transformed. These tools are integrated into the gesture analysis engine inside each touch object and allow custom gesture manipulations and property updates to occur on each touch object.

<Gesture id="n-drag" type="drag">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="1" point_number_max="5" translation_threshold="0"/>
            </initial>
        </action>
    </match>
    <analysis>
        <algorithm>
            <library module="drag"/>
            <returns>
                <property id="drag_dx"/>
                <property id="drag_dy"/>
            </returns>
        </algorithm>
    </analysis>
    <processing>
        <inertial_filter>
            <property ref="drag_dx" release_inertia="false" friction="0.996"/>
            <property ref="drag_dy" release_inertia="false" friction="0.996"/>
        </inertial_filter>
    </processing>
    <mapping>
        <update>
            <gesture_event>
                <property ref="drag_dx" target="x" delta_threshold="true" delta_min="0.01" delta_max="100"/>
                <property ref="drag_dy" target="y" delta_threshold="true" delta_min="0.01" delta_max="100"/>
            </gesture_event>
        </update>
    </mapping>
</Gesture>

In this example the gesture "n-drag", as defined in the root GML document "my_gestures.gml", directly maps the values returned from gesture processing, "drag_dx" and "drag_dy", to the "target" properties "x" and "y". Internally the delta values are added to the "$x" and "$y" properties of the touch object. This translates the object on stage to the center of the touch point cluster. As the points move, so does the touch object, effectively "dragging" the touch object across the stage.
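The processing block can also be used to tune how the motion feels. As a sketch only, assuming the "release_inertia" and "friction" attributes shown in the gesture definition above accept these values, the inertial filter could be configured to let the object glide briefly after the touch points are released:

<processing>
    <inertial_filter>
        <property ref="drag_dx" release_inertia="true" friction="0.996"/>
        <property ref="drag_dy" release_inertia="true" friction="0.996"/>
    </inertial_filter>
</processing>

Setting "release_inertia" to "true" allows the filtered deltas to continue decaying after release, while "friction" controls how quickly that residual motion dies away.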

As shown in this example a single gesture (defined in the GML) is attached to a single touch object (defined in the CML). However, multiple independent gestures can be attached to multiple touch objects using a single GML document and a single CML document.
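For example, a second touch container with its own gesture list could be added to "my_application.cml" as sketched below. This assumes additional gestures such as "n-rotate" and "n-scale" are defined in "my_gestures.gml" alongside "n-drag", and that a second image asset ("blimp1.jpg") exists; both names are illustrative:

<CanvasKit>
    <ComponentKit>
        <TouchContainer id="touchContainer1" x="100" y="100" dimensionsTo="image1">
            <ImageElement id="image1" src="library/assets/blimp0.jpg"/>
            <GestureList>
                <Gesture ref="n-drag" gestureOn="true"/>
                <Gesture ref="n-rotate" gestureOn="true"/>
            </GestureList>
        </TouchContainer>
        <TouchContainer id="touchContainer2" x="500" y="300" dimensionsTo="image2">
            <ImageElement id="image2" src="library/assets/blimp1.jpg"/>
            <GestureList>
                <Gesture ref="n-drag" gestureOn="true"/>
                <Gesture ref="n-scale" gestureOn="true"/>
            </GestureList>
        </TouchContainer>
    </ComponentKit>
</CanvasKit>

Because each touch object inspects its own local cluster, the two containers can be manipulated independently and simultaneously.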

The benefit of using CML to construct the touch object and GML to handle gesture events is that complex interactive media objects, with sophisticated gesture-based manipulations, can be created in a few simple lines of code. The details of component creation, media loading and unloading, display layouts, gesture interactions and event management are all handled automatically by the CML and GML framework in GestureWorks 3.

The tools available as part of the CML and GML internal framework allow developers to rapidly create configurable Flash applications that can be completely described using editable XML documents. This method mimics best practices for dynamically assigning object-based media assets and formatting, providing a framework that fully externalizes object gesture descriptions and interactions. This approach allows developers to efficiently refine UI/UX interactions, layouts and content without needing to recompile the application.
