The Window from Robert Ireland on Vimeo.
Outcomes from the application of methods: a critical reflection
The focus of my project is to represent through the medium of digital animation the psychological sensations of lucid dreaming and nightmare. I have approached my project from the perspective of an auteur as my work is experimental, bringing together a fusion of “high” and “low” art forms, and I am intending to strive for effects and explore themes that may have little appeal to a mass audience. The outcome of my research is a 3D digital animated film entitled “The Window” and a series of still CG images which are expressive stand-alone works independent of the film. I chose the title “The Window” for its simplicity, as it seemed to capture the thematic scope of the film and harness its many disparate visual elements born out of experimental digital sculpting. The etymology of the word has its roots in the Old Norse “vindauga” (c. 1200), from vindr “wind”, literally “wind eye”. It connotes the primal psychological landscapes of works such as Ted Hughes’ “Crow” and “Gaudete” poems, the elemental energy of Joseph Conrad’s “Lord Jim” and the evocative landscape photography of Fay Godwin, which were early sources of inspiration in my explorations of graphic literary depictions of “landscapes of the mind”. My protagonist appears to be searching for something both outwardly in the realm of metaphysical space and inwardly into the domains of the psychological and spiritual.
Achieving my Learning Agreement research objectives
My research objectives were broad, but this was by necessity as there were many influences in film, literature and sculpture with a commonality of existential and psychological themes I have intended for years to synthesise into a personal expressive work. Ultimately I feel I have largely achieved my research aims in producing a worthy animation artefact which contributes to the niche realm of adult animation, exploring psychological themes with a more impressionistic “poetic” approach and mixing comedy with mature themes and atmospheric suspense. Although there are technical “rough edges” to my work which would benefit from further intense focus “sculpting in time and motion”, the deadline necessitated the discipline of a finished product. My supervisor Andrew Love has constantly made me question my decision making in a challenging but positive way, resulting in constant revisions and reinvention throughout the Masters programme. I feel the thematic scope and graphic design of the resulting piece, although “roughly hewn” in places and lacking post production techniques I had spent considerable time researching, is for the most part a considerable step forward in my artistic development and realises the objectives of my Learning Agreement.
Evaluating further research into dreamstates and narrative
By following key research objectives in my Learning Agreement to explore multidisciplinary works of art relating to psychological, expressionistic and existential themes, my project has evolved into an audio-visual impressionistic open text. On the surface I have adopted visual motifs of conventional popular hyperreal 3D animation but have attempted to subvert these codes and conventions to convey vibrant visual and auditory sensations of REM sleep, “a unique phase of mammalian sleep characterized by random movement of the eyes, low muscle tone throughout the body, and the propensity of the sleeper to dream vividly.” The film is rooted in condensed fragmentary “dream vignettes” I have maintained over many years in an online journal, which bear some relation to the condensed narrative fragments of flash fiction. The psychological states of lucid dreaming and nightmare have a binary dynamic which appealed strongly as a narrative structural device for an animation, with potential for dramatizing complex psychological states and compelling shifts in tone. In lucid dreaming, the “dream protagonist” is a self-conscious explorer who can take control and engage interactively with their environment. On the other hand, nightmares are characterized by loss of control, in which the metaphysical becomes unpredictable and malevolent. My challenge as outlined in my Learning Agreement was to convey this in cinematic form whilst at the same time developing my themes in such a way as to give the piece a semblance of narrative form.
A reflection on outcomes following research into graphic design
My graphic designs regarding character design and environments were fundamentally inspired by the work of Jean Giraud (Moebius), whose work I had aimed to research as outlined in my Learning Agreement. Giraud’s seminal graphic novel “The Airtight Garage of Jerry Cornelius” was created entirely spontaneously, depicting vast enigmatic dreamscapes which enable the spectator to dream. As a young man Giraud travelled to Mexico, and the huge expanses of flat desert plains left a profound impression on him. I was particularly keen to apply Giraud’s intuitive meditative approach to animation production, unthinkable in a professional animation context where rigorous storyboarding and planning are absolute. I also engaged in synthesising Giraud’s landscape motifs with my own explorations of space in my photography of the bleak natural terrain of the Peak District. Inspired by the possibilities of creating 3D collages directly in digital applications, I embarked on a more intuitive methodological approach of freeform creation, assembling 3D models in collage form, resulting in a series of eerie dreamscapes both urban and natural. There was an element of risk regarding time management of the project as this change of direction occurred in September of the final year, yet it was productive and the scenes gradually began to trigger a multiplicity of possible narratives. The designs were loosely based on elements of my own original photography in the Peak District, which I combined through a gradual synthesis with the works of artists I intended to research as indicated in my Learning Agreement. These included the primal abstract forms of Mark Rothko, which at their most literal interpretations represent the cosmic planes of sky and earth, the desolate metaphysical dreamscapes of De Chirico, and the evocative sculpture of Igor Mitoraj, which draws on classical yet dreamlike figurative forms connoting a profound sense of mortality through the monumental surrealist ruins of lost civilisations.
How successful was my research into film art and sculpture relating to existential themes?
A key research objective outlined in my Learning Agreement was to research works of art exploring existential and psychological themes evoking dream sensations, which are often elliptical and fragmentary, and also to investigate narrative theory to give the work context and elements of dramatic momentum. Following research into Kieslowski’s enigmatic, haunting “The Double Life of Veronique”, one of many devices I explored was the application of recurring motifs such as imagery of windows, glass and the retina, and parallel auditory ambience evoking an atmosphere of meditation and trance. I was particularly drawn to a meditative scene in which Veronique, travelling by train, stares into a glass sphere which inverts her entire world. I developed this idea by drawing connections with the “Great Chain of Being”, the Renaissance model of creation – a hierarchy extending from the lowest life forms to divine beings. Mortal beings are thus a synthesis of angel and beast placed midway between heaven and earth. Here I drew on my readings of Shakespeare’s King Lear. The play is suffused with symbolic recurring motifs of gods and animals in binary opposition. As King Lear falls from the pinnacle of power to the status of a homeless vagrant (“man’s life is cheap as beasts”) there is a suggestion that the gods themselves may have fallen, or alternatively are disturbingly possessed of bestial qualities. This thematic motif became central to my film as a structural device – resulting in binary imagery of angels and insects.
How useful was my research into narrative theory?
In realising a key research objective I drew heavily on the narrative research of Claude Lévi-Strauss, who emphasises the narrative power of oppositional forces in generating conflict. My protagonist’s earlier encounter with a fly is alluded to later through the aerial shots of David appearing no bigger than his nemesis in the mysterious hotel he checks into at the desolate, ice-bound crossroads of his journey. Here I had to be cautious not to adopt too literal an approach as my main objective was to capture the elliptical subconscious sensations of dreaming, which can resonate on a level beyond language. I relied for these sequences on my own impressionistic abstract dreams of animals and insects, which resonated strongly in the work, and abandoned a playful homage to Kafka in which my protagonist discovers an enormous flea in his hotel bed. Instead I developed a bizarre dream in which I was chased by a levitating fish for a comedic interlude. My readings of Kafka resulted in a more focused attention to atmosphere. I hoped to visually capture the eerie sensations of “dream logic” and his perspectives of existential absurdity by basing the narrative around a more detached protagonist. My character David’s motivations are deliberately ambiguous to create enigma – after all, it is not possible in the real world to read the mind of a stranger. So I felt more confident rejecting any notion of an omniscient narrative approach, or internal monologue. The notion of going back to silent cinema also appealed, and I was encouraged to see a vague likeness to Buster Keaton, particularly in the deliberately ludicrous yet hopefully also poignant ending in which the protagonist, suspended in a dangerous limbo, clings for dear life to the foot of a motionless levitating angel.
In line with my Learning Agreement objectives, at times my protagonist appears to be in control of his world as if “lucid dreaming”; at others, his reality becomes chaotic, unpredictable and confused. As Bertrand Russell once stated: “I don’t want knowledge, I want certainty”. Dreams can often be a remarkable brew of self-fulfilling stability and terrifying chaotic nightmare. As Jung proposes in my readings of “Man and his Symbols”, many dreams are a manifestation of wish fulfillment both subconscious and self-aware. This avenue of research resulted in the introduction of an underlying Faustian motif, as I hoped to represent aspects of Jungian theory with a universal classical metaphor of some kind. The Faust myth has strong links to these themes. Perhaps the protagonist hopes to make a pact of some kind. Can he transport his suitcase of worldly possessions to the next world along with his soul? I hoped to compel the spectator to follow his progress even though he is not particularly sympathetic, a feature of Jan Švankmajer’s viscerally shocking and disturbing “Faust”, which served as a creative touchstone throughout my production despite my more conventional stylistic design. Švankmajer positions the spectator to follow the protagonist’s clearly dangerous trajectory into a forbidden world. Perhaps we are subconsciously rooting for him, as the notion of escaping the bounds of mortality may seem appealing. On the other hand the protagonist of “The Window” may be lost. He is given a key by the Mephistophelean hotel manager which may or may not provide safe refuge. One means of engaging the spectator with a remote or unsympathetic protagonist is to introduce elements of dramatic irony. This can create a dynamic, and elements of humour to defuse the often serious tone. Thus in one scene David angrily swats a fly on his nocturnal tram journey, yet ironically appears to repeatedly beat his own grimacing mirrored self.
Reflections on worldbuilding and Japanese Anime
Researching the cinematic narrative perspectives of Mamoru Oshii was especially productive, as he argues for a more meditative and cerebral approach to cinema, placing considerable emphasis on the importance of atmospheric devices and tonal shifts in the narrative. Oshii’s work conveys a profound sense of his passion for worldbuilding. The measured pacing of his camerawork resonates with the early work of Ridley Scott, particularly Blade Runner, a major influence on Oshii and Japanese anime over the last three decades. Oshii’s perspective is that popular film fails to evoke a sense of wonder by not allowing the audience time to immerse themselves in the mise en scene; the perennial focus on rapid editing and the progress of the protagonist is a missed opportunity. I adopted a slower pacing of editing, particularly in the earlier sequences, hoping to amplify the sense of wonder.
Audience positioning was important to my investigations. A key objective I believe I have on balance achieved, following early test screenings of sequences of my film, is to steer clear of literal meaning, an approach adopted by all the artists, film makers and writers I have researched. One member of my focus group, Phil Wymer, Sheffield College Lecturer, was intrigued by the protagonist’s apparel and belongings: “What’s important about the [character’s] hat? And what’s inside the suitcase? He never lets go of it.” Professional animators and employees of Newtek Lightwave responded with positivity to a scene I uploaded onto the online Facebook community. David Lynch has famously stated “I like to see what the audience bring to the table”. Following this Barthesian approach has been rewarding, resulting in a text which I hope a niche animation audience may derive some pleasure from the challenge of decoding. I have allowed intertextual elements to develop and synthesised key influences I have researched, outlined in my Learning Agreement and Reflective Blog. The subliminal references to the Faust myth, following research into the creative tour de force of Jan Švankmajer’s “Faust”, which does not compromise in shocking or surprising the spectator, are I feel dramatically successful. There is also a short sequence which alludes to the seminal “ultimate trip” sequence of Kubrick’s “2001: A Space Odyssey”.
Discussion and conclusions
Extent to which objectives have been fulfilled; contribution to knowledge; discussion and conclusions; strengths and limitations of research; relation of new knowledge to existing research; recommendations for future research
My artefact is the result of experimental methodology in my practice-based research. It differs considerably from that of professional animation houses, which function along a rigorous production line of honing a story concept into a clearly defined Vogler-styled narrative before any models or sets are constructed. In the second year of my programme I subverted this approach by adopting a collage-based approach and applying Stanley Kubrick’s narrative theory to animation, outlined below.
My action-based preparatory work initially consisted of mind mapping concepts in 2D on hand drawn index cards and producing storyboards based on my dream journal, while photographing natural landscapes in the Peak District. I complemented this by improvising preliminary models along a focused theme of “dreamscapes” and evaluating feedback by regularly posting designs on two forums frequented by professional film makers, animators and 3D designers: the Newtek Lightwave forum and Facebook page. The importance of feedback from my supervisor Andrew Love and animators from the digital community cannot be overstated in giving the work a greater sense of authority than I could have managed on my own. For example, I was surprised to receive extremely positive feedback from the community, including Lino Grandi, Newtek artist and software engineer, and detailed evaluations of early concepts for the set designs and mise en scene by respected volumetric and lighting expert Prometheus.
One early design which members of the community showed particular interest in was a 3D design entitled “Crowman”, inspired by the psychologically dark apocalyptic early 70s poetry of Ted Hughes, synthesised with a steampunk aesthetic with the purpose of exploring a theme of nightmare. Some online comments suggested the designs were genuinely nightmarish with aesthetic potential, and drew comparisons with the work of HR Giger, which indicated I was following the right path. Although I later abandoned the design, this approach of evaluating online community feedback enabled me to develop my 3D world building to a more advanced level by experimenting with instancing, fractal textures and lighting techniques. After a year of experimentation I was concerned my project still lacked cohesion. My storyboards were initially informed by the narrative structure of Vogler, which felt stifling and formulaic: a reluctant protagonist embarks on a quest which tests him to the limits of endurance; his dramatic descent into the “dragon’s lair” is followed by claiming the symbolic elixir and negotiating a dangerous road home. This felt too predictable.
A creative breakthrough resulted from applying and testing the narrative theory of Stanley Kubrick and, in the editing process, amplifying tonal shifts in mood by testing out the cinematic Deriba of Mamoru Oshii. I found the former director’s approach, rooted in his visual sensibility, liberating after initial story obstacles and false starts throughout the first year of my Masters programme. From the Diary of a Screenwriter blog: “Stanley Kubrick insisted that a feature film can be constructed from six to eight ‘non-submersible units’. A non-submersible unit is a fundamental story sequence where all the non-essential elements have been stripped away. These units would be so robust and compelling that they would, by themselves, be able to keep the viewer interested. They would contain only what is necessary for the storyline. And when joined together they would form a greater narrative.” From this point I abandoned my unresolved storyboards, which relied on a formulaic narrative arc, and began working quickly directly in 3D applications, creating my own non-submersible units which I intuitively felt had significant narrative potential. Refreshingly, it also meant I was moving in a less literal direction. The final narrative resulted from the intuitive effect of the spectator “filling in the blanks” and making an imaginative leap from one non-submersible unit to another.
Regarding evaluation of workflow and pipeline, I animated in Maya and rendered in Lightwave to take advantage of the former’s deep animation tools and the latter’s fast flexible scan line renderer. A key advantage of Maya was undoubtedly the responsive animation environment and the immediacy and flexibility of manipulating character controls simultaneously within the Graph Editor during playback. A drawback was the additional time it took to transfer point cloud geocache data from one application to another, but the import tools in Lightwave have been updated and streamlined to work quickly in a production setting. Chronosculpt, a simple but revolutionary animation sculpting tool, allowed me to make further revisions. I used the tool not quite as it was marketed – to correct mistakes – but specifically to add more expressiveness to characters’ faces. I have contributed to online debates and feature requests to further develop the application’s power.
Although I have met the deadline in producing a seven minute animation, I have inevitably had to make artistic and technical compromises. A flaw in my approach was to devise too many non submersible units to the point of being overstretched. The vast majority of the completed shots consist of raw renders with no time for promising post production techniques I had tested regarding depth of field, saturation and atmospheric mist and which are represented in my portfolio stills. There are inconsistencies such as underdeveloped expressive movement of facial features and limbs. In one example three characters are present in the same scene. The central figure is carrying a huge coffin like box into the frame, which he drops to the floor in an absurd slapstick manner. As the other two characters are standing still yet watching the scene unfold, I introduced some subtle noise to infuse the armatures with flexibility – enough I hoped to guide the spectator’s gaze to the central character’s comically exaggerated exertions.
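The "subtle noise" technique described above – infusing otherwise stationary armatures with small movements so they do not read as frozen – can be sketched in plain Python. This is a minimal illustration of the idea, not the actual production setup; the function name and the smoothing choice are my own assumptions:

```python
import random

def keep_alive_noise(n_frames, amplitude=0.5, seed=42):
    """Generate small per-frame rotation offsets (in degrees) to keep a
    stationary character's joints from appearing completely frozen.

    Raw random values are softened with a 3-frame moving average so the
    jitter reads as gentle drift rather than shivering. `amplitude`
    bounds the raw offsets; `seed` makes the result repeatable.
    """
    rng = random.Random(seed)
    raw = [rng.uniform(-amplitude, amplitude) for _ in range(n_frames)]
    smoothed = []
    for i in range(n_frames):
        window = raw[max(0, i - 1): i + 2]  # up to 3 neighbouring frames
        smoothed.append(sum(window) / len(window))
    return smoothed

# One second of offsets at 24 fps, to be layered onto a joint's rotation
offsets = keep_alive_noise(24)
```

In a production scene, values like these would be layered onto a joint's rotation curve on top of the held pose, keeping the amplitude low enough that the spectator's gaze stays on the main action.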
One important discipline of time management is to prioritise in the production schedule, so the overall sequence of the non submersible units was constructed first, recorded on coloured postcards. This was followed by blocking out animations in each scene. The third and final process required “bulldozing” through the scenes in linear fashion, fine tuning and testing before finally rendering. This process took a full 8 weeks of non stop rendering, whilst simultaneously editing the film and applying sound design.
One area I found particularly interesting was the challenge of animating stationary characters. It’s a technique with considerable potential for further research. The subtle movement of the fingers is an effective device to convey thought and enigma and to maintain verisimilitude. However, on reflection, there was a need to go beyond this in these sequences and animate more extreme poses to reflect the characters’ surprise at a ridiculous spectacle. In this scene the response of my protagonist still looks wooden and stilted, and some walk cycles are successfully fluid whereas others are functional and mechanical.
I built dual rigs in both the Maya and Lightwave applications in order to render sequences on the fly, such as close-up facial expressions, but regret not making greater use of them. I set up a shot of a character staring out of a tram window, yet I had to drop days of work due to the surprise realisation that the sequence detracted from the momentum of the narrative, strangely dispelling something of the enigma of the character. As the story panned out, there was a suggestion that the character might appear to be lost and alone – whilst in reality knowing exactly where he was going. I was unsure of what an audience would make of this, with little time to test it out ahead of the deadline as I spontaneously crafted the narrative. Yet intuitively I felt it strangely invested the tenuous dream logic of the narrative with a compelling quality. The smaller, more remote and vulnerable I made the character, the more of a ‘presence’ he seemed to have. Perhaps this was due to the power of binary oppositions in a narrative, a reflection of Lévi-Strauss’s narrative theory. A visual dynamic was created by consistently framing a tiny character amidst enormous expanses of desert-like terrain. This is a positive outcome, as a key aim of my project was to explore the effect of giving greater emphasis to landscape and environment, influenced by the Eastern anime of Mamoru Oshii and the European sequential art of Jean Giraud. However, the style which developed did not reflect my original reference points, with more subliminal 1920s Hollywood silent film references filtering through.
The mise en scene gradually began to reference elements of silent Buster Keaton films, with copious mid and long shots. This went against my prior instincts regarding the compositional power of close-ups in conveying cognitive processes such as motivation, but the technique seemed to imbue the production with a certain visual consistency and eccentric humour. So my additional work in constructing head models with close-up facial expressions for reaction shots came to naught. I still regret not focusing more on the expressive power of the human face in maintaining the momentum of the narrative. Simulating emotion is a keen interest of mine and an area I researched in the early stages of my project, defining basic primal expressions based on the work of Scott McCloud and the templates of professional animation houses. As the film developed I left this work behind, increasingly focused on the atmospheric and tonal resonance of landscapes as the direction of my research into mise en scene and deriba became more fruitful.
I initially adopted a traditional approach to character animation, firstly defining exaggerated key poses which could then be refined more subtly following the blocking procedure. I tended to keyframe all the main character controls at crucial poses in order to work quickly and avoid “drifting” limbs. I avoided stepped keyframing with the intention of establishing the timing of motions from the outset, gradually polishing the movements into convincing performances. An illuminating discussion on the 11 Second Club forum informed this decision; there are wildly divergent views regarding technique. If the timings need adjusting it is a simple matter to “stretch” the keyframes in both Maya and Lightwave. A professional reference text for my production was Kenny Roy’s “How to Cheat in Maya”. Despite its self-consciously humorous title, the book is a profoundly deep exploration of the importance of mastering simple techniques well in order to maximise the dramatic impact of character animation.
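The "stretch" operation mentioned above – retiming a block of keyframes without re-posing them – amounts to scaling each key's distance from a pivot frame, which is what scale-key style tools in both applications do. A minimal sketch of the arithmetic, with a hypothetical function name:

```python
def stretch_keys(times, scale, pivot=0.0):
    """Retime keyframes by scaling each key's distance from a pivot frame.

    times: list of keyframe times (frames)
    scale: timing factor (>1 slows the motion, <1 speeds it up)
    pivot: the frame that stays fixed while everything else moves
    """
    return [pivot + (t - pivot) * scale for t in times]

# Slow a 1-second action (keys at 0, 12, 24) to 1.5x its length
stretch_keys([0, 12, 24], 1.5)  # [0.0, 18.0, 36.0]
```

Because the poses themselves are untouched, the spacing between extremes is preserved and only the overall tempo changes, which is why this is a safe late-stage adjustment after blocking.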
The most successful sequences were based on original filmed reference. I regret not investing more time in this approach to referencing, which is commonly used in animation houses, but my intention had been to keep the character animation minimal, consisting mainly of walk and run cycles, in order to focus on my cinematic approach to representing vast regions of space. Though aware of the vital importance of studying real world human performance, for this project it was a personal and perhaps misguided whim in my first year to see how much action I could animate purely from imagination. I underestimated the dramatic authority and impact of subtle physical movements, which are exceptionally difficult to anticipate unless the animator is a gifted performance artist. The performance of a professional actor can bring unexpected nuances which considerably enhance characterisation and narrative flow. I spent considerable time eyeballing my reference video when animating my character awakening and was astonished at the huge number of frames in which virtually nothing pronounced appeared to happen – despite the compelling presence of the actor, who confidently fell into the role, projecting the persona of a lone traveller on a late night tram. Yet following hours of work on a sequence which I almost abandoned as a tedious uncreative exercise, the resulting animation sprang into life due to micro movements of features such as fingers and eyelids.
The illustrations below summarise aspects of my methodological approaches.
Alternative Inquiry Paradigms – my methodology was primarily Positivist: experimental and manipulative.
Production Workflow: A flexible approach to a Maya and Lightwave pipeline
Lightwave Modeller -> ZBrush -> Maya -> Chronosculpt -> Lightwave Renderer
Models created in Lightwave Modeler and ZBrush are textured in ZBrush, then rigged and animated in Maya. The animated models are Geocached and transferred to Lightwave via the FBX file format for rendering. Above, the lighting is adjusted in the extremely fast VPR window to the right of the UI.
My workflow has involved the use of multiple applications including Maya, Lightwave and ZBrush. The majority of models, such as the character of David, were initially created in Lightwave Modeler due to its fast intuitive toolset, which facilitates both soft and hard modelling in an efficient UI. The Magnet tool is especially powerful for crafting facial features from a base mesh. I adopted a holistic approach to the production, avoiding being “boxed in” by apportioning, say, weeks of time purely to modelling or rendering, as I discovered that mocking up simple scenes with even a half finished model would spark story ideas or ideas for scenes which complemented the underlying narrative. As this was an experimental “auteur” production, not a production line commercial animation, I had the luxury to research in this spontaneous way, though I was well aware of the risks involved. The narrative could evolve into something tortuous and unfathomable, and my time management could disintegrate if I wasn’t careful to reserve enough time for rendering, editing and post production.
It was not possible to be completely spontaneous however, as some animation procedures are so technical they require considerable forward planning. Once I’d decided on scenes which when juxtaposed appeared to spark a narrative I worked hard to develop them from static vignettes to compositions that moved in time and progressed the narrative. Working in animation without the advantages of a render farm can feel like thinking in “slow motion” – it can take weeks of work to generate a few seconds of footage.
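The "slow motion" arithmetic above can be made concrete. A back-of-envelope calculation, using the film's seven-minute running time at a standard 24 fps and assuming roughly eight minutes of render time per frame (a figure consistent with the eight weeks of continuous rendering described later in this reflection), shows why forward planning of render capacity is unavoidable without a farm:

```python
def render_schedule(film_seconds, fps=24, minutes_per_frame=8):
    """Estimate single-machine render time for a film.

    Returns (total frames, render hours, render days assuming the
    machine runs around the clock). minutes_per_frame is an assumed
    average; real scenes vary widely with lighting and effects.
    """
    frames = film_seconds * fps
    hours = frames * minutes_per_frame / 60
    days = hours / 24
    return frames, hours, days

# A 7-minute film at 24 fps:
frames, hours, days = render_schedule(7 * 60)
# 10,080 frames -> 1,344 hours -> 56 days (8 weeks) of non-stop rendering
```

Even halving the per-frame cost only brings this down to four weeks, which is why editing and sound design had to proceed in parallel with rendering.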
Newtek, creators of Lightwave, have made the opening of Maya geocached FBX files extremely simple through a streamlined UI which facilitates the association of models with animated point cloud files. I did this manually as it worked reliably – static models sprang into life within seconds. There is an auto function which can potentially speed up the process further.
Why choose Lightwave for rendering?
I chose the Lightwave renderer over Maya’s Mental Ray because, at the time of planning the project, Newtek’s highly advanced VPR rendering technology gave virtually instant rendering feedback in scenes, a huge advantage in terms of time when setting up lighting rigs and establishing mood and tone. Rendering controls for antialiasing, depth of field and motion blur are also represented in the VPR window. The main renderer allows for sophisticated fine tuning, which helps with planning ahead and keeping to a tight schedule. There is always a trade-off between time and quality, and this feature made working to deadline far more manageable. Another advantage of Lightwave is the very flexible and accessible nodal texturing, which is streamlined and enables changes to be made to individual models extremely quickly. Rendering technology is proceeding at a prodigious rate. Maya now comes equipped with Arnold, a physically accurate renderer which also provides an “immediate” representation of a scene.
Surfacing in Lightwave can be as simple or complex as desired. The nodal option allows for sophisticated application of multi layered texture and normal maps with considerable scope for experimentation.
Lightwave’s Genoma autorigging technology was also used to bypass Maya for certain shots, so a rigged character could be animated directly in the software for facial expressions. I would have animated the whole film in Lightwave were it not for Maya’s highly flexible Graph Editor, fast response times and autotangents, which give very usable generic splines producing smooth, easily adjusted movements.
Once an exported animated Maya scene is opened in Lightwave for lighting and texturing, the instant feedback of the VPR renderer facilitates subtle adjustments to texturing and lighting. In the case of the tram image above, clamped falloff was applied to an area light to add some atmospheric shadows. There were still some rough edges in the geocached animation, but the majority of mistakes could be cleaned up quickly in Newtek’s Chronosculpt geocache editor and new subtle facial expressions added on the fly. This facility to further develop point cloud animated files was previously unavailable, so animations at this stage were effectively “set in stone”.
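Conceptually, a geometry cache is nothing more than per-frame vertex positions, which is what makes after-the-fact editing possible at all. The sketch below is purely illustrative (it is not Chronosculpt’s file format, and the function name is invented): it shows the idea of nudging cached vertices over a frame range once rigging and keyframing are already baked in.

```python
# Illustrative sketch (NOT Chronosculpt's actual format): a geometry cache
# is conceptually per-frame vertex positions. "Editing a cached animation"
# means editing this data directly, after the rig has been baked away.

def nudge_cached_verts(cache, vert_ids, offset, start, end):
    """Apply a small positional offset to chosen vertices over a frame range."""
    for frame in range(start, end + 1):
        for v in vert_ids:
            x, y, z = cache[frame][v]
            cache[frame][v] = (x + offset[0], y + offset[1], z + offset[2])
    return cache

# A tiny two-frame "cache" of three vertices.
cache = {
    0: [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
    1: [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 0.0)],
}
nudge_cached_verts(cache, vert_ids=[1], offset=(0.0, 0.1, 0.0), start=0, end=1)
print(cache[1][1])  # vertex 1 lifted slightly on both frames
```

The same principle explains why such edits were once “set in stone”: without a dedicated editor, there was no practical way to rewrite this data by hand.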
Only a small selection of tools is necessary for organic character creation in Lightwave. The David character was built primarily using Knife, Extender, Move and Magnet.
Morphs were added to the David character and further edited in Maya using the new advanced Blendshape system, which allows, for example, facial expressions to be created directly within a scene when they are needed. Despite the advantages of constantly improving technology, time constraints meant I reluctantly had to keep facial expression work to a minimum in order to meet daily targets for animating sequences, maintaining the fragile surrealistic narrative and rendering out the scenes.
Hard surface modelling
Lightwave has a powerful, economical toolkit for hard surface modelling and I made good use of a particularly flexible plugin highly regarded in the Lightwave community, LWCAD, which allows the user to work in NURBS but toggle back into polygons where necessary. Both polys and NURBS can be rendered in Lightwave, which is a particular advantage, as the latter allows for industrial levels of precision if necessary.
Original model of an Eastern European inspired building. I avoided a futuristic or recognisably urban mise en scene as I wanted to evoke something of a Kafkaesque tone and capture elements of my own experience travelling through Poland. For some reason, the multiple windows crammed into a narrow structure create a sense of enigma. Who lives here and why?
The original hard surface building above was created quickly using native Lightwave tools, particularly Bevel, Knife, Extrude, Boolean and Thicken. At a later stage in my project I accessed the NURBS tools in LWCAD. This powerful plugin can be especially useful for creating architecture quickly as, in addition to its precise CAD splines and powerful Boolean features, it allows for customisable preset windows, doors and roofing.
To maintain momentum and avoid the model disappearing into a virtual filing cabinet, I spent a little time “dipping” the model directly into the colour, shadows and light of the renderer and experimenting with a simple scene. This was advantageous despite the dangers of becoming distracted from the modelling process as a collage approach generated more potential story ideas.
VPR allows for experimentation, constructing scenes spontaneously. Here, the simple juxtaposition of sky and earth combined with a skydome creates a strange surreal effect. These scenes are particularly influenced in their composition by Terry Gilliam, Edward Hopper and Giorgio de Chirico. All these artists are interested in framing their subjects in the context of the enormity of space, which evokes a sense of absurdity.
The eerie stillness of de Chirico’s vacant terrain foregrounds the architectural structures, imbuing them with a sense of absurdity.
Terry Gilliam’s flat expansive landscapes also foreground the characters, amplifying the composition’s rich blend of hilarity and nightmare.
Overall the pipeline I adopted allowed for considerable flexibility and a relatively fast workflow which reflected methods used particularly in smaller animation houses. According to the Lightwave 3D Group website: “Major Studios and Post-Production Houses spend years assembling their custom pipelines, often at great expense and requiring large numbers of technical staff to maintain. Those complex pipelines are perfect for companies with hundreds of employees, but can be overkill for the majority of studios around the world with 40 employees or less.” Ultimately I used Lightwave Layout for rendering as it gives rich results with strong gradations and contrasts of light and shade. It is also extremely fast when setting up scenes, especially in regard to texturing and rendering and a liberating platform for experimentation with mise en scene.
A 3D collage approach to developing spontaneous and unexpected narratives
Other models in the urban scenes I assembled “collage style” were created by “Frankensteining” and editing old found copyright free models I have collected over the years. I have tried to credit authors wherever possible, but ultimately it was advantageous to build “from the ground up”. It can be a false economy to resort to found meshes due to variability of quality and ambiguities regarding copyright. Sketchup is a good source of architectural models, but they are often more useful as reference: the quality is variable and many models are erratically constructed with non-planar and overlapping polys, which can take considerable time to repair and cause problems in texturing and rendering.
Setting up the mise en scene. This scene was constructed by blending elements of an original basic gothic city model juxtaposed with the edited façade of a found building object, reconstructed into a mysterious, imposing building. All the elements were “dirtied up” and wintery colours applied to give a hyper-real effect, though photorealism was not an aim – I was more interested in evoking atmosphere.
Character animation in Maya: Evaluating approaches to keyframing
The majority of scenes were animated in Maya. The character animation tools are powerful and deep. The new Geodesic skinning facility minimises work spent painting weights to ensure sound character deformations, and the refined blendshape capabilities, enabling the creation of, say, morphed facial expressions on the fly, have greatly improved workflow.
For character animation the default autotangents in the Graph Editor are effective in conveying naturalistic organic movement using a minimum of keyframes. Although Graph Editor splines in Lightwave can be shaped precisely using Bezier curves, an equivalent to the autotangent feature is not present as of 2017. Nevertheless, as a backup, and to evaluate the two systems, I rigged the character David in Lightwave using the powerful Genoma autorigger to animate incidental scenes and facial expressions, taking advantage of the efficient and immediate morphing capabilities.
This second model of David, animatable directly in Lightwave, meant that I could focus more on facial expressions and closeups. Regrettably, time constraints inhibited any further work, so the vast majority of scenes were animated in Maya.
To keep the animation manageable but fluid I avoided stepped keyframes: I blocked out the key poses and gradually refined the tangents in the Graph Editor by adding keyframes.
Key poses were defined with close reference to Maya’s Graph Editor, and blocking movements were achieved using autotangents combined with stretching keyframes for timing with the region tool. I avoided stepped tangents, reflecting on my techniques after reading an illuminating debate on the 11 Second Club, and worked continually using Playblast to refine the movements.
Animation debates: Discussion relating to Maya Graph Editor
Ryan Hagen: There’s a lot of high profile animators who never touch the graph editor. Needed? No. Helpful? Sure.
There are extremely illuminating and constructive debates, which are worth reading, on the 11 Second Club relating to animation technique. I am a strong advocate of, to paraphrase an animator, “living in the Graph Editor” when animating. It is extremely useful, in fact essential, to build up a “vocabulary” of splines – the varying shapes can of course translate into wildly differing movements. To summarise:
- Developing a spline shape vocabulary considerably enhances an animator’s understanding of a range of expressive movement
- Subtleties of motion can be eyeballed in the viewport but can be fine-tuned in a highly detailed way.
- It is possible, with a low poly proxy mesh to edit splines whilst the animation is playing.
- The Graph Editor allows copying and pasting keyframes for repetitive movements such as walk cycles, but these can be subtly adjusted. For further reading: http://www.11secondclub.com/forum/viewtopic.php?id=445
Discussion relating to Keyframing: Stepped splines and autotangents
My own approach, after much trial and error, has been to follow this procedure:
- Be clear about the character’s motivation, thoughts and feelings. The theory of Ed Hooks relating to performance is important here. This was one of my weaker areas and so I gave it some attention.
- Aim to reflect these responses in the character’s body language
- Break down the sequences of thoughts and the character’s responses into clearly defined “blocks”, posing the character at key points on the timeline e.g. Frame 1; 5; 9
- Avoid stepped frames and use autotangents. The downside is that they will represent movement as a little “floaty” to begin with but can gradually be refined.
- Timing can easily be adjusted in the Graph Editor through “stretching” keyframes.
- The keyframe blocks can gradually be refined to introduce subtle movements across the clearly defined but artificial temporal boundaries.
- The blocked keyframes ensure an animator doesn’t get lost in a “sea” of keyframes.
- Blocked keys give an animator an overview of the whole scene, allowing subtle adjustments which can have a huge impact.
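The “floaty” quality of autotangent blocking can be seen numerically. The sketch below uses Catmull-Rom interpolation, a common basis for automatic tangents (this is an illustrative stand-in, not Maya’s actual autotangent algorithm): between two identical key poses the curve drifts and overshoots rather than holding, which is exactly the softness that later refinement passes tighten up.

```python
# A minimal sketch of why spline-interpolated blocking feels "floaty":
# tangent-based interpolation (here Catmull-Rom, a common auto-tangent
# scheme, standing in for Maya's own algorithm) eases smoothly through
# every key instead of holding poses like stepped keys do.

def catmull_rom(p0, p1, p2, p3, t):
    """Value at parameter t (0..1) between keys p1 and p2."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Blocked key values for, say, an arm rotation keyed at frames 1, 5, 9, 13.
keys = [0.0, 40.0, 40.0, 0.0]

# Sample between the middle pair of keys: although both keys hold 40.0,
# the curve drifts up to 45.0 in between instead of holding the pose.
samples = [catmull_rom(*keys, t / 4) for t in range(5)]
print(samples)  # [40.0, 43.75, 45.0, 43.75, 40.0]
```

Refining the blocking then amounts to adding keyframes or reshaping tangents wherever this drift reads as unintentional.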
Darryl Vasquez’s contribution to the discussion is particularly illuminating and reflects my own perspective on animating in stepped mode: This is a much debated topic because it is up to the animator and what suits him/her. I just heard from one of the instructors at ianimate (Ted Ty). The questions was just like yours, “what is the best way to go from stepped to spline”… Ted’s answer was “don’t do stepped”, haha. I was a self taught animator before going into ianimate and only went through Jason Ryan’s tutorials so his workflow was all I knew. Know being in a school with all these instructors he see that there are so many ways to do things and one of the most popular ways that people are animator is blocking in spline.
the thing that makes going from stepped to spline so difficult is that you get a way false sense of timing seeing a pose for 4, 8 maybe even 20+ frames, this is unrealistic because that pose will only show for 1 frame in normal playback. doing a spline blocking (splocking) approach is difficult if you are really knew to animation and if you don’t probably plan like others have said. But I think what others have said about proper planning is the key… and one thing to learn is that animation is hard and muscling through sections of animation is apart of the game and your [sic] not the only doing it.
An especially useful site for developing an understanding of splines and their uses for animation in the Maya Graph Editor is Create 3D’s Graph Editor Fundamentals: http://create3dcharacters.com/maya-animation-graph-editor-fundamentals/
I’ve found this invaluable as a reference tool for developing an understanding of timing and a small vocabulary of spline movements.
My own animation I would evaluate as purposeful, and for the most part it maintains the momentum of the narrative well. I regret having to pull away from my explorations into facial expression due to time constraints and the pressure to place less emphasis on R&D and begin production. I aimed to avoid the “floaty”, erratic animation which often occurs in productions of this type by revising the splines continually. In some cases splines were densely keyframed to pin down the movement; at other times expansive movements, such as the character flying, used minimal keyframes. My next project will see much more of a focus on facial expression as a means of maintaining the compelling aspects of narrative.
An index of essential spline vocabulary: Create 3D
More reflections on organic character modelling: key techniques to build up speed and expressiveness
Ed Catmull published the first computerised polygonal modelling following his research at the University of Utah in 1972. It’s important to be mindful of this extraordinary milestone in the development of computer graphics, as the huge advances in hardware and software over the last four decades have blessed creators with a previously unimaginable range of creative tools. However, it is worth studying the approach of 3D pioneers such as Timur “Taron” Baysal and Steven Stahlberg, who pioneered 3D modelling from the 90s onwards and focused on economy of technique using an extremely basic toolset.
Unconventional creative and spontaneous techniques
Minimalistic polygonal character modelling
For the most part my approach to modelling consisted of creating base meshes of characters and sets in Lightwave transferred in FBX format to ZBrush for UVing, texture mapping and surfacing, which were then transferred to Maya for animation – then back to Lightwave for rendering.
There is frequently heated debate between brand loyal software communities regarding the most effective basic polygonal modelling program but toolset fundamentals found in the major applications Maya, Cinema 4D and Max have become essentially homogenised since their advent in the 1990s. Newtek have made only minor changes to their Modeler in the last 10 years, signifying that basic polygonal modelling has plateaued, although the development team have acknowledged the need for development of software which will handle far higher polycounts.
After many years of exploring and trialling software for character creation, in my view too many tools can actually impede creation and in many cases “less is more”. Highly innovative tools such as the 3rd Powers plugins and LWCAD can fulfil a need for extremely specialised modelling such as advanced Booleans, but for the most part only basic tools are required for good results creating basic digital marionettes. From the mid 90s some artists such as Timur “Taron” Baysal were creating extremely expressive, sophisticated models using only a small number of basic 3D tools, a decade before Pixologic’s ZBrush became more firmly established with the release of version 3.1 in 2007.
A 2004 review by Crossbones on CGSociety.org of The Secrets of Organic Modeling: Lightwave Modeling Techniques with Taron is particularly insightful, outlining modelling techniques which are still effective today for polygonal modellers.
In the usual warm and friendly way, Taron starts this DVD by introducing himself in a humorous manner. This DVD was created in real-time, which I find more advantageous, rather then watching a sped up version of the artist racing through what he’s talking about.
Timur Baysal’s highly progressive approach to modelling in 2004 using a minimal toolkit
What can we learn from Timur Baysal’s highly progressive approach to modelling in 2004 using a minimal toolkit?
Taron starts by removing aspects of Lightwave’s interface which he feels impedes people by getting them hung up digging through sub menus. He limits the number of tools he permits to be visible in order to achieve his results. While Taron prefers to use Lightwave, he encourages you to find similar features in your modeling package and map them to hot keys.
Less is more: basic toolkit for polygonal head modelling
By customising a menu of a small number of essential tools, the creation of low poly characters could proceed efficiently. My own approach to character modelling using a basic toolset has been heavily influenced by this. Crossbones continues in his review to summarise these early techniques, which are still powerful today:
• Select a group of polygons where you wish to add detail
• Smoothshift (extend) that area
• Use the Stretch tool to push the polygons into position
• Drag points further into position to your liking, defining the surface
• Lastly, study the area you just created to define the edge flow in the topology to create the geometry you desire, and repeat.
From this point, Taron then focuses on what the main draw of his DVD is: how to approach modeling organic structures like a human head. He chose this because a head has all the curvatures and details which translate to modeling any other organic form. He reveals advice about anatomy, design, modeling for animation and displacement. We also see again how a mere primitive box can turn into a highly elaborate yet necessarily simplistic model of a humanoid head, all of which he accomplishes in 2 hours of real time.
Of particular interest is Taron’s approach to facial modelling. Although Taron was working primarily in the Perspective view, the emphasis on character profile and silhouette is especially important in developing form and personality. All Taron’s work has a signature gothic style which is a considerable achievement in the often homogenous world of 3D.
According to Crossbones: He focuses a good deal of time generating the silhouette of the head. In minutes he roughs out something organic and humanoid, extending a box out to form the neck, shoulder area, and jawline, all using very little geometry. Taron wants to get all the features in place then tweak, add and manipulate the geometry to further reveal the character within.
The power of Taron’s creations stems from his close attention to his expressive modelling and subtle rendering of the eyes. It is useful to any character modeller to explore research in Communication Studies regarding this fundamental facial feature. Certainly Pixar are well known for their smoothly rendered iconic design of eyes, as well as the familiar “sparkly” motifs of Manga artists.
The psychology of facial expression
Draft render of my protagonist. The basic poly model is a low res mesh with sub surface scattering applied to create a surface akin to skin, though avoiding photorealism. Morphs were applied subtly to the eyebrows and mouth. There was some light “leakage” in the nostrils and eyes – making the model double sided fixed this issue.
The iconic Pixar eyes: significance?
The eyes of a character are especially important as the vision of a spectator is instinctively drawn to them; a fundamental feature of face to face communication. In an article in the British Psychological Society Research Digest, “The Psychology of Eye Contact, Digested”, Christian Jarrett explores the impact of eye contact in face to face communication:
As adults, locking eyes with another person immediately triggers in us a state of increased self-consciousness. Researchers showed this by asking participants to rate their own emotional reactions to various positive and negative images, some of which were preceded by a face staring straight at them, others by a face with gaze averted. Participants had more insight into their own emotional reactions (which were measured objectively through the galvanic skin response) after they’d made eye contact with a face… In fact, eye contact is such an intense experience it even seems to consume extra brain power, making it difficult to perform other challenging mental tasks at the same time. This year a pair of Japanese researchers tested participants on a verb generation task while at the same time they looked at a realistic on-screen face that was either making eye contact with them or had its gaze averted. Making eye contact impaired the participants’ performance on the hardest version of the verb generation task, presumably because it consumed spare brain power that might otherwise have been available to support performance on the verbal task.
For animators this is significant because the spectator’s gaze is likely to prioritise the eyes, decoding nonverbal signals before exploring other facial features. It follows that in the early stages of modelling a face, it’s important to focus on the size and placement of the eyes and consider what aspects of personality they will reflect. Crossbones’ review of Taron’s technique continues:
For myself, the toughest part of approaching faces has always been modeling the eyes. Taron gives advice that upper eyelids should run over the lower lids. The skin above the eyelid is another challenge; to finesse this fold nicely, Taron, to relieve stress, suggests having you come back later when the rest of the head is more complete, which I appreciated because it is very much how a painter works an entire canvas, rather than getting hung up massaging solely one location.
Taron’s pioneering organic head modelling process using basic polygonal modelling tools
Taron has a holistic approach, constructing a head by focusing not simply on form but also polyflow and imbuing his designs with extraordinary personality and an eerie sense of enigma:
Taron takes a moment between each step and makes large modifications to make the model focus into form. The process repeats, promoting experimentation yet always keeping control of where the geometry is going. Taron’s idea is to have the edge flow very homogeneous so that one area flows into another very cleanly. Modeling this way instantly shows the correct way in which the geometry is supposed lay. Problem areas are always addressed right away by spinning quads or collapsing areas and rebuilding them. It’s surprising how he defines so much with very little geometry.
An early creation of Taron, decades ahead of its time regarding modelling technique, design and rendering. Produced in Lightwave 3D.
I have adopted elements of Taron’s approach in my own modelling as the approach is more akin to sketching and allows for the potential of unexpectedly interesting spontaneous character designs. Taron’s approach is a far more intuitive and creative departure from the more traditional method of head modelling, mechanically building up a head poly by poly using a 3D backdrop as illustrated below.
Traditional approach to setting up polygonal facial modelling: effective as a mechanical exercise regarding precision defining facial characteristics but distinctly lacking in spontaneity.
TorQ’s basic polyflow topology: precisely economical.
TorQ’s Blender Artists template: a highly economical yet effective model for facial creation which maximises potential for facial expression.
Taron has since moved on to ZBrush due to its immediacy as a simulated clay modelling application: “ZBrush exceeds any previous experiences, and invites far more elaborate explorations of form and expression of shapes.” In my own case, I have also followed this route, occasionally working directly in ZBrush or adopting a hybrid technique, using the template devised by TorQ of Blender Artists as a guide to polyflow, allowing maximal potential for facial expressions. This template has served as a superbly clear and economical approach to modelling a face for maximum flexibility when animated in Maya.
The primary tool used here was the Magnet tool, equivalent to soft selection in Maya. Once the basic head template is established there is huge potential in using just this simple tool to, say, drag out the nose, forehead and cheekbones. Combining Taron’s approach whilst conforming to elements of TorQ’s template is a liberating and organic method which allows for spontaneous character creation in the viewport.
A character head modelled for my film project with minimal toolset in a basic polygonal modelling app. This adopts aspects of Taron’s approach combined with basic theory relating to polyflow.
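The idea behind the Magnet tool and Maya’s soft selection can be sketched in a few lines. The implementation below is hypothetical (neither program’s actual algorithm, and `magnet_move` is an invented name): dragging one point moves its neighbours too, scaled by a smooth falloff, which is what makes sculpting a nose or cheekbone out of a flat template feel organic.

```python
# Sketch of the idea behind Lightwave's Magnet tool / Maya's soft selection.
# Hypothetical implementation, not either program's actual algorithm:
# dragging one point also moves its neighbours, weighted by a smooth falloff.
import math

def magnet_move(points, centre, offset, radius):
    """Move points near `centre`, with influence fading to zero at `radius`."""
    moved = []
    for (x, y, z) in points:
        d = math.dist((x, y, z), centre)
        if d >= radius:
            moved.append((x, y, z))  # outside the falloff: untouched
            continue
        w = 0.5 * (1 + math.cos(math.pi * d / radius))  # cosine falloff, 1 -> 0
        moved.append((x + offset[0] * w, y + offset[1] * w, z + offset[2] * w))
    return moved

# Drag the tip of a "nose" forward: the centre point moves fully,
# a nearby point moves about halfway, a distant point not at all.
pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(magnet_move(pts, centre=(0.0, 0.0, 0.0), offset=(0.0, 0.0, 1.0), radius=1.0))
```

The falloff shape is the whole character of the tool: a wide radius gives broad, gestural changes to the planes of the face, a tight one pulls out fine features.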
2. Sculpting in virtual 2.5D clay: The power of Zbrush
Head Space: Sculpture by Meats Meier showing the potential for spontaneous, expressive 3D work
Newer technologies, such as Pixologic’s micropoly modelling and voxel based programs like 3D Coat (akin to sculpting in clay), are a major advance, allowing for incredibly detailed modelling and 3D texturing. As liberating and expressive as these programs are for character design, it’s important to be mindful of keeping poly counts as low as possible on completed character meshes in order to keep response times immediate in Maya for rigging and animation. In teaching ZBrush at NTU I made it a priority to focus on efficient interchange of assets between programs to reflect commercial animation pipelines, explored in greater detail below.
The project gave me a good reason to further develop my modelling skills in ZBrush regarding character creation. I have used the application since 2005, but the development of Dynamesh technology has enabled a truly organic style of virtual clay modelling: in earlier versions the artist, although dealing with dense meshes consisting of millions of polygons, was still constrained to sculpting from an imported base mesh.
Several technical developments have considerably increased the power of ZBrush. Prior to the introduction of Dynamesh technology, ZBrush’s simulation of modelling virtual clay was dependent on increasing the poly count of a mesh to extreme proportions in order to prevent the mesh “breaking” when, for example, extruding a character limb from a sphere. This still limited modelling options to importing a base mesh as a starting point. Dynamesh is a far more flexible and economical system as it retopologises the mesh with “micro polys” in an economical way when necessary. It further allows meshes to be seamlessly blended together through the use of the Insert Multimesh tool, a process akin to modelling in clay.
Nottingham Trent ZBrush character creation lectures
Below is a simple approach to character creation I outlined in lectures at Nottingham Trent University. The ZBrush interface differs considerably from conventional polygonal modellers and frequently intimidates users familiar with Maya, Max and Cinema 4D. My approach was to focus on just a small number of tools for maximum efficiency and creative potential.
Basic head modelling in ZBrush
1. Define the basic head shape from a sphere, using the Move tool, symmetry applied. Mask the nose using a simple basic approximation. This can be easily edited later but helps the artist contextualise the facial zones.
2. Inverse the mask and pull out the nose using just the Move tool.
3. Using the Move tool set to large radius, define the basic planes of the face. Mask the eye sockets, blur the mask, Invert then drag the sockets with the move tool back into the head. Set the Mask tool to small, define the nostrils, Invert, then drag back into the nasal cavity.
4. Use Claypolish to sharpen the planes of the face as you work to prevent losing a sense of fundamental underlying shapes. Eyelids are created from simple spheres which are embedded in the main mesh using Insert Mesh. Slits in the eyes are masked, inverted and then pulled inside the eye socket then polished.
5. Create eyeballs as separate subtools from simple spheres
6. Manually retopologise, drawing polys directly on the mesh following TorQ’s basic polyflow template.
Further reflections on creators and digital toolsets: William Vaughan
Having used a number of modelling applications since the mid 90s, I subscribe to the views of William Vaughan outlined in a thought provoking article on the Pixelfondue website in which he reflects on the relationship between creativity and digital toolsets. Essentially William’s philosophy is that the challenge of 3D, as both an expressive and commercial medium, is about problem solving. Understanding and appreciating the process, rather than misguidedly believing digital toolsets to be restrictive, will result in a more positive and productive outcome.
“In this industry, we are not modelers, lighters, animators, or compositors. The best title for what our job is on a daily basis is problem solvers. As production artists, we are thrown problem after problem, and we have to devise solutions to move on to the next phase in production. The next time you “wonder” if something is possible, say, “I bet it is; I just need to figure out how to do it.” Do that and I believe you’ll have far better results than giving up before you begin…
Much 3D animation involves variations on simulating reality – an artificial “smoke and mirrors” reconstruction. The finished artefact is artifice and so it really should not be a concern by what means the illusion is created as long as suspension of disbelief is sustained in the spectator. William’s perspective regarding digital creators feeling limited by their toolsets is particularly well expressed.
I tend to use the phrase “back in the day” all the time—which is surely a sign of getting older—yet I can’t help but explain to new artists the stuff we used to have to do to solve what seem like minor hurdles with today’s tools. Not having the tools didn’t stop us. When we needed a flag blowing in the wind and there were no cloth dynamics to be found, we simply ran a procedural texture through a segmented plane and called it a day, and at the end of the day (to use another overused phrase) what it is really about is solving each task with the tools and techniques that you currently have. Sure, the tools will improve and so will your bag of tricks, but you already have the things you need to accomplish today—not tomorrow!
Don’t get me wrong. I’m not saying don’t push for new tools and improvements from the software developers. I push for new tools all the time. What I don’t do is let the tools I currently have in hand stop me. This type of positive thinking and problem solving is what has helped most successful artists and studios flourish. Otherwise, studios with massive teams of programmers to write every tool needed for every job would be the only ones to play a significant role in our industry. What fun would that be?
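Vaughan’s flag example can be sketched directly. With no cloth solver, running a moving periodic “texture” through the vertices of a segmented plane reads convincingly as wind-blown cloth. The sketch below is purely illustrative (the function name and parameters are invented): any periodic displacement function will do, with the displacement tapered to zero at the flagpole edge.

```python
# Vaughan's flag trick, sketched: with no cloth dynamics, displace a
# segmented plane with a moving procedural wave and "call it a day".
# Illustrative only; function name and parameters are invented.
import math

def flag_heights(cols, rows, time, amplitude=0.2, wavelength=2.0, speed=3.0):
    """Z-displacement for each vertex of a (cols x rows) flag grid at `time`."""
    grid = []
    for v in range(rows):
        row = []
        for u in range(cols):
            phase = (u / wavelength) - speed * time  # wave travels along the flag
            taper = u / (cols - 1)                   # pinned at the flagpole edge
            row.append(amplitude * taper * math.sin(phase))
        grid.append(row)
    return grid

# The pole edge (u = 0) never moves; the free edge waves as time advances.
frame = flag_heights(cols=8, rows=4, time=0.5)
print(frame[0][0], frame[0][7])
```

Evaluated every frame, this gives continuous rippling motion for the cost of one sine per vertex, which is precisely the “smoke and mirrors” problem solving the passage describes.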
Key production features of Lightwave 3D: still a contender in the 3D app wars due to its proficiency, speed, nodal texturing and fast flexible renderer
Although I’m conversant with modelling in Maya and 3DS Max, I originally used Lightwave Modeler for freelancing due to its economical price point and powerful toolset, which were designed for artists rather than CAD engineers. The modelling tools were significantly advanced compared to other 3D applications in the mid 90s following the development of “MetaNURBS” subdivision surfaces, an organic hybrid of polygonal modelling and NURBS surfacing. A combination of simple tools such as Bevel, Magnet, Spin Quad and Merge Polys was demonstrated by the artist Taron as a highly expressive yet economical approach to character construction in an era before ZBrush digital sculpting.
Development of Modeler has effectively plateaued as the Lightwave 3D Group has placed more emphasis on Layout, the animation, rendering and compositing environment. However the fundamentals are still in place, enabling a fast and economical workflow. The UI is highly accessible as it avoids icons; tools are clearly identified in English and searchable via a menu index.
3. More simple but powerful tools which can further enhance creative potential
Cage Deformer: Increased Animation control
An extremely powerful set of animation, sculpting and Boolean plugins produced by the Japanese group 3rd Powers enables the creation of complex 3D meshes in very little time in Lightwave Modeler, further enhancing the application’s organic, expressive advantages as a creative tool.
Ryan Roye’s reviews on the Lightwave 3D forum summarise the power of 3rd Powers Cage Deformer, which is especially useful for enhancing facial expressions or correcting problematic deformations.
Cage Deformer is far better for working with low-to-medium detail meshes because you get the benefit of a “sculpting” workflow directly in Lightwave and don’t have to lose any element of preview-ability in order to make edits. One can make the mesh deformations work with existing rigs, and no MDD baking/export/import is required (you will have to export MDD to share work with people who do not have the plugins, however). As said in the video, the work done in CageDeformer can be layered on top of existing rigs; something Chronosculpt doesn’t do.
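The mathematics underpinning cage deformation can be sketched simply. 3rd Powers’ actual algorithm is not public, and production tools typically use more sophisticated schemes such as mean value coordinates, but the core idea — each mesh point inherits a weighted blend of the cage vertices’ movements — can be illustrated with basic inverse-distance weights:

```python
# Illustrative sketch only, not 3rd Powers' algorithm: bind mesh points to a
# cage with inverse-distance weights, then move them by the weighted sum of
# the cage vertex offsets.
import math

def bind_weights(points, cage, power=2.0):
    """Precompute, for each mesh point, a normalised weight per cage vertex."""
    weights = []
    for p in points:
        w = []
        for c in cage:
            d = math.dist(p, c)
            w.append(1.0 / (d ** power) if d > 1e-9 else 1e9)
        total = sum(w)
        weights.append([x / total for x in w])
    return weights

def deform(points, weights, cage_offsets):
    """Move each point by the weighted sum of the cage vertex offsets."""
    out = []
    for p, w in zip(points, weights):
        delta = [sum(w[i] * cage_offsets[i][axis] for i in range(len(w)))
                 for axis in range(3)]
        out.append(tuple(p[axis] + delta[axis] for axis in range(3)))
    return out
```

Because the binding step is precomputed, only the lightweight `deform` step runs per frame, which is why this kind of edit layers cheaply on top of an existing rig.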
Chronosculpt: Sculpting in Time
Near the end of my project I made extensive use of Newtek’s innovative Chronosculpt which enables the editing of cached animation, again, extremely useful for further refining facial expressions in subtle ways. Ryan Roye’s observations below underline the production advantages:
Chronosculpt is better for high-poly meshes and completely obliterates Cage Deformer in terms of performance; removing the worry of mesh density, detail, etc affecting feedback while editing. The timeline is easier to manage and is scalable with bezier-editing available. One can motion-sculpt the entire character without consequence and with much greater control rather than just a small section of them, or having to worry about the density of the “Cage” limiting control. Chronosculpt also has mirror-editing capabilities unlike Cage Deformer.
Chronosculpt was originally marketed as a means of correcting previously uneditable mistakes embedded in cached files which had escaped attention at the primary animation stage. In practice I have used the application to “sculpt in time”, for example refining the movements of a character’s facial features or fingers. Chronosculpt would be invaluable in any production house which works with the transfer of mdd point cloud data for animating characters. It offers considerable scope for the embellishment of pre-crafted animation.
Creative animation edits can easily be added to cached data using Chronosculpt which was primarily designed to correct faults. It is especially powerful for refining facial expressions.
Cached data used to be effectively “fixed” in time and space but Chronosculpt allows the user to further refine the animation using intuitive basic sculpting tools. Changes to the motion appear in the timebar and can easily be refined and saved out to morph targets which is a particularly flexible feature. A disadvantage is that edits are not currently nameable and so if edits are superimposed it can be difficult to distinguish them. Below is a closeup of facial features being edited in the program. Note the red circle “sphere of influence.”
Here the character’s eyebrow movements are manipulated over time to further convey intention, thought and emotion.
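Chronosculpt’s internals are not public, but conceptually this kind of edit is straightforward: an offset applied to every cached point inside the sphere of influence, with a smooth falloff to zero at the radius, repeated across a chosen frame range. A minimal sketch of that idea:

```python
# Conceptual sketch (not Chronosculpt's actual code): offset all cached points
# inside a "sphere of influence", with a smooth falloff to zero at the radius,
# over a chosen range of frames.
import math

def smooth_falloff(t):
    """Inverted smoothstep: 1 at the sphere's centre, 0 at its edge."""
    return 1.0 - (3 * t * t - 2 * t * t * t)

def sculpt_frames(frames, centre, radius, offset, frame_range):
    """frames: list of frames; each frame is a list of (x, y, z) points."""
    start, end = frame_range
    result = []
    for i, pts in enumerate(frames):
        if not (start <= i <= end):
            result.append(list(pts))  # outside the edited range: untouched
            continue
        edited = []
        for p in pts:
            d = math.dist(p, centre)
            if d < radius:
                f = smooth_falloff(d / radius)
                p = tuple(p[a] + f * offset[a] for a in range(3))
            edited.append(p)
        result.append(edited)
    return result
```

The smooth falloff is what keeps such edits from producing visible seams where the sphere of influence ends.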
Groboto: Voxel Boolean Creativity
I’m particularly interested in exploring the creative potential of niche modelling apps made by independent creators. Despite its unorthodox UI and workflow, Groboto is an application dedicated solely to non-linear Boolean modelling and capable of producing particularly complex meshes which can be edited at any stage and exported as Wavefront objects. Groboto was designed by artist and software engineer Darryl Anderson, whose distinctive vision resulted in an app capable of producing extraordinarily complex geometric 3D designs either manually – the approach I adopted – or semi-automatically. A disadvantage is that Booleans are restricted to a library of primitives which, although expansive and editable within certain parameters, does not allow the use of imported meshes.
The technology has since been acquired by the Foundry’s Modo application and developed into Meshfusion, which now enables complete control over Boolean modelling, although it results in quite dense meshes.
Groboto was particularly useful in creating otherworldly geometric forms and so led to the alien dreamlike meshes connoting other worlds, particularly the glass angel factory sequence of my film. There are some allusions to HR Giger in the style, which references elements of his biomechanical motifs and the gothic cavernous chambers of his backdrops. The designs gave me scope to create more expressive pieces which contrast significantly with the more precise real world architectural models. A disadvantage of this technique was that the extremely dense meshes I created weighed the scenes down in memory. This restricted me from working completely “in camera” and meant potentially having to rely on compositing.
The exotic forms of the buildings in this shot are created by the voxel Boolean generator Groboto and exported as obj files which can then be further edited in any modelling programme.
More reflections on file interchange and workflow considerations: From Lightwave to Maya and back again
The development of the FBX interchange format, owned by Autodesk since 2006, has liberated 3D asset creation as modellers are no longer restricted to single applications. FBX is especially advantageous as it is not simply restricted to modelling but also transfers materials, blendshapes and animations. This has made interchange between applications such as Lightwave, Maya and ZBrush for the most part stable and efficient. Once the animation is complete, industry practice now makes considerable use of Geocaching in the pipeline. In many professional production houses the Geocached data is by default locked to avoid unintentional corruption and sent to staging animators whose particular focus is to set up the mise en scene through cameras and lighting.
FBX technology combined with MDD Geocaching further enables efficient interchange of animation data. The character meshes, once exported, are animated in Lightwave through “point clouds” assigned to the objects, while joints and rigs are left behind in the translation. Transferring joints and rigs themselves between Maya, Lightwave and other apps such as 3DS Max can still be extremely problematic if specific procedures regarding rig construction and naming conventions are not adhered to. In this case it was not necessary to follow this route as Geocaching is so streamlined and economical.
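Part of MDD’s appeal as an interchange vehicle is how simple the format is. As commonly documented, an MDD file is plain big-endian binary: a frame count and point count, one float time per frame, then x/y/z floats for every point in every frame. A minimal read/write sketch makes this concrete (the exact byte layout here follows the commonly published description, so treat it as an assumption rather than a spec):

```python
# Minimal sketch of the MDD point-cache layout as commonly documented:
# big-endian binary; int32 frame count, int32 point count, float32 frame
# times, then x/y/z float32 coordinates for every point in every frame.
import struct

def write_mdd(path, times, frames):
    """times: one float per frame; frames: list of frames of (x, y, z) tuples."""
    with open(path, "wb") as f:
        f.write(struct.pack(">ii", len(frames), len(frames[0])))
        f.write(struct.pack(">%df" % len(times), *times))
        for pts in frames:
            for p in pts:
                f.write(struct.pack(">3f", *p))

def read_mdd(path):
    with open(path, "rb") as f:
        nframes, npoints = struct.unpack(">ii", f.read(8))
        times = list(struct.unpack(">%df" % nframes, f.read(4 * nframes)))
        frames = []
        for _ in range(nframes):
            raw = struct.unpack(">%df" % (npoints * 3), f.read(12 * npoints))
            frames.append([tuple(raw[i:i + 3]) for i in range(0, len(raw), 3)])
        return times, frames
```

Because the cache is just baked per-frame vertex positions, no rig or skeleton needs to survive the translation — which is exactly why the transfer is so dependable.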
I made considerable use of baked Geocached data from Maya to take advantage of Lightwave’s flexible biased ray tracing engine. Although the transfer of data works very quickly using the FBX and / or Alembic format, production time is inevitably lost cumulatively. Working completely in a single application would be faster, but I felt there was more creative potential using the Lightwave render engine in preference to Mental Ray.
I asked a number of technical questions and shared my workflow on the Newtek forum learning a great deal about the technical procedure. Extracts from the full discussion on the public Newtek forum are below, which will be beneficial to anyone experiencing issues transferring data between apps.
FBX Geocache workflow discussion
Maya animation -> baked Lightwave MDD files
I need to export an animated main character mesh from Maya into LW for rendering. Using Geo cache export from Maya and the I/O LW import controls in Layout, the baked *main character mesh* is imported into Layout, assigned mdd and animating OK in LW.
However the eyeballs and teeth of the character are *parented* rather than smooth bound to the animated Maya skeleton, and unfortunately do not appear to have the same relative baked space time coordinates as the main mesh. Once imported into Layout and assigned mdd files they are floating about independently of the main character.
Can anyone offer suggestions as to how to get all the components working harmoniously together for import into Lightwave Layout?
You could try to “freeze” transformations in Maya for the eyes, then export the cache as a separated MDD.
Or you could try to export the scene in FBX, so you get the whole hierarchy and animation in LW.
3D Development, LightWave 3D Group/NewTek, Inc.
Ernest and Lino, many thanks for your replies. The scene is working great in Lightwave now.
As Lino suggested I isolated the eyes and teeth in Maya and exported them as separate objects. For anyone reading interested in Maya <-> LW workflow, make sure “world” space (+ “float”) are ticked rather than “local” on all objects when running Export GeoCache in Maya.
I haven’t tried Maya -> LW FBX in any depth yet. Does this allow a complete animated Maya scene to be imported into Lightwave and vice versa? If so, that is an incredible advance. Just upgraded to LW 2015 so excited to see more improvements swapping files between applications
Now the animation tools such as Genoma and RHiggit are so advanced, it will be far easier to work directly in LW to make use of the brilliant VPR.
I haven’t tried Maya -> LW FBX in any depth yet. Does this allow a complete animated Maya scene to be imported into Lightwave and vice versa?
Not sure everything translates over. I know I’ve tried to fbx work files and not everything translated over. That may be due to the specific way the Maya team had their animation setup though.
No luck, unfortunately, I tried importing the Maya FBX scene – skeleton, character and animation – into Lightwave but it came in as a mass of bones plus multiple blend shape objects. There are so many options in Maya FBX export it’s daunting, so a lot to learn. I’ll stick to Geo cache for now as it’s fast and dependable.
Blendshape facial expressions in Lightwave and Maya
This discussion led onto transferring blendshapes between applications. This was an area of particular interest, as in the early stages of my project the Lightwave technology was in many ways faster and more efficient than Maya’s. Blendshapes for facial expression could be applied directly onto meshes in seconds, without the Maya technique of duplicating meshes and transferring edited information. The immediacy of this approach was preferable and so all blendshapes were created in Lightwave and exported along with the models into Maya via FBX.
When learning Maya I was surprised at how unintuitive the blendshape workflow was compared to simply embedding morphs in LW objects. In Maya, you copy a character’s head, frame it in the scene, manipulate vertices, then project the morph back onto the original mesh, which might not work if it’s not stitched back onto the body in the right way. It seems faster to model morphs in Modeler and export FBX into Maya, preserving all the blendshapes: a significant step forward.
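Whichever application hosts them, the arithmetic behind morphs/blendshapes is the same: each target stores per-vertex deltas from the base mesh, and the deformed vertex is the base position plus the weighted sum of all active deltas. A minimal sketch (the names `targets` and `weights` are illustrative, not any app’s API):

```python
# Sketch of blendshape/morph evaluation: base vertices plus the weighted
# sum of each target's per-vertex deltas from the base mesh.
def apply_blendshapes(base, targets, weights):
    """base: list of (x, y, z); targets: {name: list of delta (dx, dy, dz)};
    weights: {name: weight in 0..1}. Returns the deformed vertex list."""
    out = []
    for i, (x, y, z) in enumerate(base):
        for name, deltas in targets.items():
            w = weights.get(name, 0.0)
            dx, dy, dz = deltas[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out
```

Because only deltas are stored, targets combine additively — a half-weighted “smile” and a half-weighted “blink” can play simultaneously without either overwriting the other.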
Animation Debates: Is there too much focus on narrative and character at the expense of other elements?
The conventional view of animated narrative is fundamentally entwined with character development. There are variations, but thousands of years of storytelling have crystallised into narrative formulae rooted in Greek classical theory. Essentially a protagonist embarks on a quest of some kind, usually reluctantly to maximise drama, to face challenges or obstacles which test them morally and / or physically to the limits of endurance. The protagonist may experience defeat at this stage, again amplifying dramatic potential but then is resurrected figuratively or even literally. The journey home, following what Vogler refers to as the symbolic “seizing of the sword” is not necessarily safe, thus creating further potential for dramatic conflict. The dynamics of a story may further be enhanced by enigma and a sense throughout that the protagonist is fallible in some way; vulnerable to unpredictable forces and perhaps a nemesis of some kind set up in binary opposition.
Chris Vogler, a Hollywood story analyst, developing the work of Campbell, refined classical narrative into a formula he believed conveyed the universal appeal and power associated with Greek storytelling appropriate for contemporary audiences. His narrative model is based on mythic archetypes which he felt were embedded in the “shared unconscious” of an audience with “universal concerns”. The stages of his approach to storytelling are consistent with storytelling tropes associated with archetypal mainstream Hollywood. I’ve highlighted below the most viscerally exciting elements of Vogler’s model which filtered through into my own dream narrative.
Although animation is a medium which usually requires considerable forward planning, I have taken a creative risk by abandoning storyboards, setting out to create a narrative which has evolved intuitively and spontaneously. Drawing initially on dream fragments I’ve recorded over the years, my goal has been to develop short simple surreal vignettes into a cohesive self-sustaining whole. This process is fraught with risks, and at several stages the whole project has threatened to turn to mist, generating more questions than answers. Are the surrealistic elements engaging enough to hold the attention of the spectator? Can I turn technical limitations which threaten to dispel the suspension of disbelief to advantage? One question in particular has focused my attention: does the narrative need strong characterisation to function, or are there missed opportunities in taking the conventional approach to storytelling?
I attempted to pin my ideas down by referencing some key elements of narrative theory without allowing them to dominate or dictate the flow of the story, following the Kubrickian approach of giving visual elements predominance. The advantage of this approach is that juxtaposing these sequences led to a surfeit of unexpectedly interesting ideas and themes, many of which unfortunately I had to abandon due to time constraints. Psychologically, the spectator “fills in the gaps” when faced with seemingly disparate scenes, so in the editing process I was intrigued to see how far I could push this. Early film makers discovered that the “jump cut” was not simply a means of economising on storytelling, but stimulated the audience into proactively making connections. It was my hope that this would reflect something of the experience of “dream logic” where apparently absurd narrative developments are meaningful on a more subconscious level.
As my story began to take form I applied Vogler’s narrative theory in retrospect and found, intriguingly, that most elements of my tortuous dream story linked with his key stages.
Further research led to the work of Pedro Serrazina, whose writing on space and mise en scene in his article Spatial constructions: A practitioner’s view of animated space on the Society for Animation Studies website highlights the risk of missed opportunities through the conventional focus on characterisation in narrative:
“As my animation career progressed, with the opportunity to direct my first film, The Tale About the Cat and the Moon (1995), I realized in early production that, compared to my animation colleagues, rather than focusing on character design, I was much more interested in something else: the overall spatial concept of the film and the placement of the virtual camera, the framing and, specifically, the animation of the whole landscape space. This approach lead me at first to an understanding of the animated space as a powerful visual and narrative element and then, eventually, as a tool of social reflection.”
Serrazina’s work is a virtuoso exploration of time and space. Its premise is simple: the spectator follows the nocturnal pursuits of a cat in a Mediterranean town, by means of a viscerally dramatic use of organic shifting line work and extraordinarily witty collisions of light and shade.
Serrazina’s perspectives gave me the confidence to proceed with the cinematic exploration of time and space without being consumed by considerations relating to character, which was a limitation from the start. Although stylistically my animation lacks the freeflow creative abandon of The Tale About the Cat and the Moon, I hope to capture a sense that the spectator is being taken on a journey across the vistas of a dream landscape in a more sustained hypnotic measured tone.
This reading opened up potentially rewarding directions, such as setting aside character development altogether in order to explore space, landscape, mood, tone and abstract emotion, areas of long-standing personal interest. Ultimately I felt more confident about proceeding with a relatively unsympathetic character who remains unknowable. Time constraints meant I couldn’t focus on facial expression as much as I had intended at the outset, so I attempted to turn this limitation to an advantage.
Much of the narrative cohesion has evolved through the editing stage: a process akin to assembling and juxtaposing elements of collage. It’s important that the resulting narrative has cohesion in order to counterbalance its heightened surrealistic nature so the film is permeated with recurring binary motifs, for example angels and insects; gods and monsters; outer space and inner space. Symbolic dynamics are effective in creating dramatic conflict and giving a text context.
Applying Kubrick’s theory of narrative was particularly focused at the beginning of the creative process and at the end. I initially modelled and staged key sequences which I considered visually interesting, responding to feedback from professional 3D artists and animators on the Newtek forum. Once animated and rendered, the next step towards realising a spontaneous narrative was juxtaposing them through the editing process to create a surreal narrative which was meaningful in an unconventional way. Kubrick has suggested that editing is a particularly rewarding part of the film making process: “I love editing. I think I like it more than any other phase of film making. If I wanted to be frivolous, I might say that everything that precedes editing is merely a way of producing film to edit.”
I aimed to keep the premise very simple: my protagonist misses his tram stop, which leads to complications. This was born of dream sensations in which I have become lost in unfamiliar environments, usually in search of an individual or occasionally a possession such as a camera or suitcase. My main goal was to capture something of the “otherness” of dreams and the extraordinary blend of the familiar and alien.
Further reading and research relating to narrative and dream: The Dream as Text, The Dream as Narrative by Patricia Kilroe.
Kilroe distinguishes between the experience of a dream itself and the dream report, proposing that the report qualifies as a text with narrative elements. She quotes Barthes to establish context:
“Numberless are the world’s narratives. First of all in a prodigious variety of genres, themselves distributed among different substances, as if any material were appropriate for man to entrust his stories to it: narrative can be supported by articulated speech, oral or written, by image, fixed or moving, by gesture, and by the organized mixture of all these substances; it is present in myth, legend, fable, tale, tragedy, comedy, epic, history, pantomime, painting…, stained-glass window, cinema, comic book, news item, conversation”. (Barthes 1994:95).
Kilroe argues that the dream report should be added to this list in her illuminating exploration of psychological and artistic perspectives, with a close focus on “metaphor, metonymy, and punning in the formation of dream imagery”. However, Barthes did not view dreams as analogous to classical narratives because they are “removed from the logico-temporal order.” They are incomprehensible because in his view they do not follow narrative conventions of a structured chain of events.
Freud on the other hand believed that a dream report, the “secondary revision”, is integral to the dream itself. Reportage is, in Kilroe’s words, a “text making narratizing process” occurring simultaneously and condensing the experience into a readable representation through the application of metonymy, selection and editing.
One perspective cited by Kilroe I find particularly illuminating is that “The dream is a metaphor in motion” (Ullman 1969), which captures the essence of what I aim to achieve in my project: an expressionistic open text woven together from disparate yet recurring symbolic elements.
In summary, Kilroe views the reports of dreams as texts born of subconscious experience, analogous to, say, literary texts which arise through the selection, editing and representation through metonymy of waking experience.
“A dream seems to be a steady disequilibrium, with no functional or thematic interest in solving or rounding out a problem. The narrative of the dream is concerned with ramifications of a tension, … not with getting me into trouble (or pleasure) and out of it, but with extending the trouble (or pleasure) to the boundaries of the feeling that produced the dream.”
Propp’s archetypes help focus characterisation and create a narrative dynamic.
Brogaard, Berit. “Lucid Dreaming And Self-Realization”. Psychology Today. N.p., 2016. Web. 6 Dec. 2016.
Dallow, Peter, Art, Design & Communication in Higher Education 2 (1&2), pp. 49-46.
Derakhshani, Dariush, 2013, Indiana, Introducing Autodesk Maya, John Wiley and Sons.
Drabble, Margaret, 8th January, 2011, Fay Godwin at the National Media Museum, Guardian [http://www.theguardian.com/artanddesign/2011/jan/08/margaret-drabble-fay-godwin]
Eagleton, Terry, 1983, Literary Theory: An Introduction, Oxford, Blackwell Publishers.
Collin, Robbie, 2014. Hayao Miyazaki interview: ‘I think the peaceful time that we are living in is coming to an end’, Telegraph [ONLINE] Available at: http://www.telegraph.co.uk/culture/film/10816014/Hayao-Miyazaki-interview-I-think-the-peaceful-time-that-we-are-living-in-is-coming-to-an-end.html. [Accessed 6 December 2016].
Earnshaw, Steven, 2006, Existentialism, a Guide for the Perplexed, London, Continuum.
Faerna, Jose Maria (Ed.), 1995, De Chirico, New York, Harry N. Abrams Incorporated.
Fineberg, Jonathan, 1995, Art Since 1940: Strategies of Being, London, Laurence King Publishing.
Harrison, Charles and Wood, Paul, 1992, Art in Theory, 1900-1990, Oxford, Blackwell Publishers.
Holub, Miroslav, 1990, Poems Before and After, Newcastle, Bloodaxe Books Ltd.
Hughes, Ted, 1972, Crow, London, Faber.
Jenkins, David PhD. 2012. The Nightmare and the Narrative. [ONLINE] Available at: https://www.uel.ac.uk/wwwmedia/microsites/cnr/documents/Jenkins.pdf. [Accessed 6 December 2016].
Koepfinger, Eoin, 5th June 2012, “Freedom is becoming the Only Theme”, Sampsonia Way 5th [http://www.sampsoniaway.org/blog/2012/06/05/freedom-is-becoming-the-only-theme-an-interview-with-jan-svankmajer/]
Lynton, Norton, 1990, The Story of Modern Art, Oxford, Phaidon.
Naukowa, Redakcja (Ed.), 2003, Igor Mitoraj, Warsaw, Wydawcy.
Oesterreicher-Mollwo, Marianne, 1978, Surrealism and Dadaism, Oxford, Phaidon.
Renner, Rolf Gunter, 1990, Edward Hopper: Transformation of the Real, Köln, Taschen.
Sagar, Keith, 1975, The Art of Ted Hughes, Cambridge University Press.
Saunders, Mark and Tosey, Paul, 2013, The Layers of Research Design, Rapport journal, ANLP.
Wells, Paul, 2002 Animation: Genre & Authorship, Columbia University Press