DM7903 Week 12 – Final Proof of Concept and Deliverables

To present a Proof of Concept for the Multimedia Centre Augmented Reality mobile application (‘MMC AR’ app, for short), I have produced a range of deliverables.

Firstly, I have recorded a 10-minute presentation that would be given to stakeholders, such as the Multimedia Centre staff and the Director of IT. Wider stakeholders such as members of funding panels, as well as the University’s Health and Safety Officer, would also need to be shown the presentation so that the mobile application’s viability can be considered from many angles. The presentation itself demonstrates the functionality of the application alongside accessibility features, funding opportunities, and development challenges. I have also taken the opportunity to explain how research in a prior module, DM7921: Design Research, has influenced the design process.

To supplement the presentation, I have attached a silent video of the four user-flows that are demonstrated within the presentation. This will allow stakeholders to view the flows in full, which the presentation itself does not.

Appreciating that some stakeholders would prefer a hands-on demonstration of the mobile application, I have also provided links to all four user-flows. In real life, I would aim to pitch the presentation face-to-face, offering a demonstration iPhone so that stakeholders could interact with the four flows, created in Adobe XD. This would give stakeholders the most realistic simulation that a high-fidelity prototype can provide.

Finally, I have included an academic poster, created for the University’s Academic Poster Exhibition, which illustrates the accessibility features within the mobile application.

DM7903 Week 11 – High Fidelity Prototype Development

This week I started development on a High Fidelity Prototype of the Multimedia Centre AR mobile application. I decided to work in Adobe XD on this occasion as I already had a working knowledge of the software, while also having the opportunity to build on this by learning features that were new to me, such as integrating sections of video and producing multiple ‘Flows’.

Flows

Before building the prototype, I had to take note of who the intended recipient would be. In my previous project, DM7917: Emerging Media, the recipient was a client who was interested in trying a hands-on simulation of the mobile application. For this reason, I developed the prototype so that they could tap on different menus and access each screen in a non-linear fashion.

However, for this module the inclusion of AR-based videos meant that the prototype had to be accessed in a linear way, interacting with or viewing each screen in sequential order – so I built four ‘Flows’ that the client could preview:

Flow 1 – Onboarding with VoiceOver, including Freeze Frame creation
Flow 2 – Reviewing Freeze Frames
Flow 3 – Object Selection Screen 
Flow 4 – ‘Ask the Trainer’ Feature

I planned to then screen-record myself accessing these flows and compile the footage into a Proof of Concept presentation later. This is an ideal way to control a pitch to a client, as I can explain the purpose of each flow to the client, stakeholders, and other developers, whilst clarifying any intricacies and ensuring that the prototype is demonstrated exactly as I intend it to be. 

I have found prototyping with videos to be challenging, especially when the experience needs to be paused at specific moments, or when the user is expected to move between screens at certain points of video playback. Adobe XD does not allow two timer triggers to be allocated to one screen/artboard. As a result, I’ve had to use the timer trigger to start video playback, then progress between screens manually when recording a video of each flow. This is not ideal, but it does allow me to achieve a working prototype. One further setback is that if I do not time the progression between screens perfectly, videos reset to their first frame, causing a jarring effect that compromises the professional image of the prototype.

Above: A GIF animation of the jarring effect caused by video files resetting to their first frame
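In a coded build of the app this pausing behaviour would not need a workaround. Below is a minimal, hypothetical sketch (not part of the prototype) of how the production app might pause AR tutorial footage at a chosen moment, assuming the footage is played back with AVFoundation’s AVPlayer; the class name and timing value are illustrative only.

```swift
import AVFoundation

// Hypothetical sketch: pause tutorial footage at a named moment so the user
// can interact (e.g. create a Freeze Frame) before playback resumes.
final class TutorialPlaybackController {
    let player: AVPlayer
    private var boundaryObserver: Any?

    init(videoURL: URL) {
        player = AVPlayer(url: videoURL)
    }

    /// Starts playback and pauses automatically at `pausePoint` (seconds).
    func playAndPause(at pausePoint: Double) {
        let time = CMTime(seconds: pausePoint, preferredTimescale: 600)
        boundaryObserver = player.addBoundaryTimeObserver(
            forTimes: [NSValue(time: time)],
            queue: .main
        ) { [weak self] in
            self?.player.pause() // hold here until the user taps to continue
        }
        player.play()
    }

    /// Called once the user completes the on-screen interaction.
    func resume() {
        player.play()
    }
}
```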

Use of Adobe XD Features

At the High Fidelity stage of the prototyping process, I could turn my attention to transition animations between screens/artboards. These would provide subtle communication to the user, indicating the status of the mobile application. For example, the ‘Slide Left’ animation is shown to the user once the ‘Ask the Trainer’ function is triggered. The sliding-in animation should communicate that the user has only temporarily side-stepped from the AR experience, and that the AR experience will be restored once they return from their conversation with the Trainer.

Above: A GIF animation showing the ‘slide’ transition used to introduce the ‘Ask the Trainer’ UI
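If this screen were built in code rather than Adobe XD, the same slide-in could be expressed very simply. The following is a hedged SwiftUI sketch under that assumption; the view names and trigger are placeholders.

```swift
import SwiftUI

// Hypothetical sketch of the 'Ask the Trainer' slide-in: the chat panel moves
// in from the trailing edge, signalling the AR view is only temporarily covered.
struct ARScreen: View {
    @State private var showTrainerChat = false

    var body: some View {
        ZStack {
            Color.black.ignoresSafeArea() // stand-in for the live AR view

            if showTrainerChat {
                TrainerChatView()
                    .transition(.move(edge: .trailing)) // 'Slide Left' equivalent
            }
        }
        .animation(.easeInOut(duration: 0.3), value: showTrainerChat)
        .onTapGesture { showTrainerChat.toggle() } // placeholder trigger
    }
}

struct TrainerChatView: View {
    var body: some View {
        Color.white.overlay(Text("Ask the Trainer"))
    }
}
```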

When required, the ‘Auto-Animate’ feature was used to create ‘menu drawer’ effects for the Object Selection and Freeze Frame screens. This animation allows the user to make use of app functionalities whilst not leaving the AR experience. I’ve also implemented this in such a way that these functionalities are within the ‘reachable zone’ of the users’ thumbs.

These ‘menu drawers’ were somewhat successful in the high fidelity prototype, but sometimes a dissolve animation was shown on playback despite ‘Auto-Animate’ being selected. I suspect this is a limitation of the software: the ‘Auto-Animate’ option is already being used to continue the AR video across multiple screens/artboards, so applying it to the same screen/artboard a second time falls back to a dissolve animation.
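For comparison, here is a minimal, hypothetical sketch of how such a drawer might behave in a coded (SwiftUI) version of the app, keeping the controls within thumb reach while the AR view stays visible; the sizes and labels are assumptions.

```swift
import SwiftUI

// Hypothetical 'menu drawer': a bottom sheet that slides up over the AR view,
// keeping its controls in the thumb-reachable zone.
struct ObjectSelectionDrawer: View {
    @State private var isOpen = false
    private let drawerHeight: CGFloat = 280

    var body: some View {
        ZStack(alignment: .bottom) {
            Color.black.ignoresSafeArea() // stand-in for the live AR view

            VStack(spacing: 12) {
                Capsule().frame(width: 40, height: 5).foregroundColor(.gray) // drag handle
                Text("Object Selection")
                Spacer()
            }
            .frame(maxWidth: .infinity)
            .frame(height: drawerHeight)
            .background(.thinMaterial)
            .offset(y: isOpen ? 0 : drawerHeight - 44) // leave the handle peeking above the edge
            .onTapGesture {
                withAnimation(.spring()) { isOpen.toggle() }
            }
        }
    }
}
```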

Another feature that was particularly useful was the ability to add ‘states’ to UI objects. I made use of this when creating the ‘Freeze / Thaw’ toggle switch. In Flow 1 I set the switch to its default ‘Off’ state so that it could be toggled to its ‘On’ state when demonstrating the Freeze feature. When toggled, the switch would illuminate green and move to the right-most position while also advancing the prototype to the next screen/artboard. This was the first time I had linked a rich feature with an outcome – in my previous projects the switches were usable but didn’t advance the prototype in any way.

Above: A GIF animation demonstrating the change in state of the ‘Freeze’ toggle switch
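The same pattern of linking a state change to an outcome translates directly to code. Below is a hedged SwiftUI sketch, assuming the coded app would use a standard Toggle; the destination view and naming are placeholders.

```swift
import SwiftUI

// Hypothetical Freeze/Thaw control: flipping the switch changes its visual
// state and also drives an outcome (advancing to the Freeze Frame review).
struct FreezeThawControl: View {
    @State private var isFrozen = false
    @State private var showReviewScreen = false

    var body: some View {
        Toggle("Freeze", isOn: $isFrozen)
            .tint(.green) // illuminate green when switched on
            .padding()
            .onChange(of: isFrozen) { frozen in
                if frozen {
                    showReviewScreen = true // mirror the prototype advancing to the next artboard
                }
            }
            .sheet(isPresented: $showReviewScreen) {
                Text("Freeze Frame review") // placeholder destination
            }
    }
}
```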

Similarly, I built on my knowledge of the ‘repeat grid’ tool when developing this prototype. Previously my knowledge was limited to using the functionality as a time-saving technique, producing many UI elements efficiently. However, on this occasion I identified individual images within the repeat grids, making them tappable links that advance the prototype to the next screen/artboard. Two examples of this are the ‘Freeze frame’ drawer and the ‘Object Selection’ screen.

Above: A GIF animation demonstrating the Object Selection screen
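In a coded app, the equivalent of a repeat grid with individually tappable cells might look like the following hedged SwiftUI sketch; the equipment names and placeholder icon are illustrative only.

```swift
import SwiftUI

// Hypothetical Object Selection grid: each cell is individually tappable,
// much like turning single images inside an XD repeat grid into links.
struct ObjectSelectionGrid: View {
    let objects = ["Camera", "Lens", "Tripod", "Microphone", "Light", "Recorder"] // placeholder list
    let columns = [GridItem(.adaptive(minimum: 100))]

    var body: some View {
        ScrollView {
            LazyVGrid(columns: columns, spacing: 16) {
                ForEach(objects, id: \.self) { name in
                    Button {
                        print("Load AR model for \(name)") // would advance to the AR view
                    } label: {
                        VStack {
                            Image(systemName: "cube") // placeholder thumbnail
                                .font(.largeTitle)
                            Text(name)
                        }
                    }
                }
            }
            .padding()
        }
    }
}
```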

Next Steps

When producing this high-fidelity prototype, I’ve tried to keep its purpose in mind throughout the entire process. As well as using the prototype to demonstrate a Proof of Concept to the University (including all potential stakeholders), the Adobe XD file may be used by developers to build and code the mobile application itself. For this reason, I’ve demonstrated how annotation could be provided with each screen/artboard, explaining how interaction between the user and app should work, as well as when certain changes in state should occur for UI elements.

Once I have compiled and presented each flow of this prototype to all stakeholders, I would then wait for instruction on any final amendments and for any queries to be resolved. Once final approval is given, then hypothetically (if this brief were a ‘live’ one) I would work alongside the University’s marketing team to hand off the high-fidelity prototype to an internal or external development team, providing them with relevant extracts of the University’s branding guidelines alongside supporting files such as typefaces and graphics.

Links to Flows

Below are four hyperlinks to interactive presentation versions of each flow within the prototype. I have also provided a YouTube video of the flows, and intend to demonstrate them within my final Proof of Concept.

Flow 1 – Onboarding with VoiceOver, including Freeze Frame creation
https://xd.adobe.com/view/66c075fd-fc6d-4779-969a-efedc36c2600-3393/?fullscreen

Flow 2 – Review Freeze Frames
https://xd.adobe.com/view/b381bac3-f6e5-4508-a812-20aff89dcdd9-e6d2/?fullscreen

Flow 3 – Object Selection Screen 
https://xd.adobe.com/view/bb19391e-bcd6-4b53-a2f9-d2370fc33ef4-ee4a/?fullscreen

Flow 4 – ‘Ask the Trainer’ demonstration
https://xd.adobe.com/view/75f2879b-ae64-42d1-9637-bf59f2509110-20a1/?fullscreen

DM7903 Week 10 – UN Sustainable Development Goal 17

To ensure that I’m achieving the DM7903: Design Practice Learning Outcomes, I’ve decided to consider my project from the perspective of the United Nations’ Sustainable Development Goal 17: Partnership for the Goals.

Throughout my time on the MA Digital Media Practice course, I’ve established a partnership between myself (acting as an App Designer and Media Trainer) and the University of Winchester, with the primary goal of increasing student and staff access to technology in the University’s Multimedia Centre (MMC). By establishing this partnership with the University, I have demonstrated an awareness of an entity that consists of like-minded individuals, with whom I can build a working relationship (United Nations and The Partnering Initiative, 2020, p. 31). In collaborating with the MMC team, I have consequently been able to enquire about the needs of their students (e.g.: the research in my DM7921: Design Research module), and find opportunities for innovative projects such as my design work in DM7917 and DM7903 (United Nations and The Partnering Initiative, 2020, p. 31). 

This approach of aligning with a team and actioning innovative design ideas is an important one, and may epitomise the goal of partnering to achieve the UN Sustainable Development Goals.

As the University is looking to grow throughout the next decade as part of its ‘smart growth target’, it may need innovative solutions that are scalable. I understand that the University is looking to expand its existing courses, teaching many more students, potentially placing strain upon areas such as the training provision in the MMC (The University of Winchester, 2021). This expansion may also increase the demands for media equipment as students continue to work towards meeting their module assessments. 

In this module, I am developing a proof-of-concept for an AR-based training provision, which has the potential to be scalable as training demands grow. A mobile app would be a good distribution method for this AR experience, as it could be downloaded for free by many users across both iOS and Android platforms, then used as a training resource at any place and any time. Trainees would be able to train both remotely and asynchronously, reducing the need for students and staff to travel for training.

The change in training provision and requirement of remote learning would take some time to be fully adopted. During this time, I would see the University as an ‘enabler… providing the space and time for the platform to develop’ (United Nations and The Partnering Initiative, 2020, p. 28).

The positive environmental impact of removing the need for on-campus attendance in training sessions would help the University on its journey to becoming carbon neutral, and address UN Sustainable Development Goal 13, ‘Take urgent action to combat climate change and its impacts’ (United Nations, 2021). This may also reduce the capacity pressures on the University’s estate, as increasing capacity demands could be offset by training being delivered remotely.

However, one limitation of the Multimedia Centre AR application is that there are currently no plans to include any assessment tasks. This means that trainees’ competency with MMC technology cannot be assessed remotely or asynchronously. This limitation could be overcome by a mobile application such as the prototype I designed for the DM7917: Emerging Media project, which featured ‘hazard perception’-style assessments and reaction-time testing (Helcoop, 2021).

As well as increasing access to technology for its students, the AR-based training app could enable further knowledge sharing within the wider community, including providing support for commercial use of the University’s media equipment, even outside of office hours. An example of this would be a school whose staff book one of the University’s auditoriums and related equipment to host and record a nativity play. The app would enable the school’s staff to train themselves on the required equipment in advance of the booking date.

Furthermore, there is also potential for this AR-based training provision to be replicated by other universities and science/technology-based industries, as the UN strives for a common agenda on partnerships both nationally and globally (United Nations and The Partnering Initiative, 2020, p. 15).

References

Helcoop, C. (2021). DM7917 Emerging Media Student Directed Project. [online] Christopher Helcoop Portfolio. Available at: https://christopherhelcoop.winchesterdigital.co.uk/index.php/dm7917/ [Accessed 20 Nov. 2021].

United Nations (2021). Goal 13 | Department of Economic and Social Affairs. [online] United Nations Sustainable Development. Available at: https://sdgs.un.org/goals/goal13 [Accessed 23 Nov. 2021].

United Nations and The Partnering Initiative (2020). Partnership Platforms for the Sustainable Development Goals: Learning from Practice. [online] United Nations Sustainable Development. Available at: https://sustainabledevelopment.un.org/content/documents/2699Platforms_for_Partnership_Report_v0.92.pdf [Accessed 22 Nov. 2021].

The University of Winchester (2021). Staff Open Meeting with ELT.

DM7903 Week 9.1 – Usability Test 3 Report

Report

00:00 – Tester is asked to read introductory paragraph
00:39 – Test started
00:39 – Tester made aware of sound usage in the prototype
00:50 – Tester completes V/O instruction to tap the target to place a camera
00:55 – Facilitator observes as Tester begins to swipe the screen, following the instruction to move the camera around. A limitation of the prototype is that swiping functions do not work. This function is skipped once the Facilitator observes the Tester’s behaviour
01:07 – Tester follows instructions quickly to rotate the camera
01:25 – Tester follows instructions to Freeze / Thaw the AR experience
01:48 – Tester follows instruction to proceed to the Object Selection screen

Summary

I felt that this was the most successful test so far. I feel more confident that I’ve been able to compile the tasks for these tests in a more succinct way; this has allowed them to become more purposeful and efficient to analyse.

Covering up the written instruction was a successful way of measuring the clarity and effectiveness of VoiceOver instructions within the application. Of course, a limitation of this is that the user cannot re-read instructions, so must be able to comprehend the instructions they are hearing. This was not a problem during this test, however, so I’m quite satisfied that the VoiceOver is sufficient.

No issues were highlighted during this test so I’m confident that I can proceed to the high-fidelity prototyping stage.

Actions

Proceed to high-fidelity prototyping stage. Export the VoiceOver instructions for the next prototype as they do not need further recording.

Informed Consent Form

DM7903 Week 9 – Justifying my Onboarding Process

When creating this project’s first paper prototype for usability testing, I decided to implement an onboarding process to help first-time users learn how to use the app. Users of many productivity and media mobile apps become disengaged by the seventh day of use, and this could be exacerbated by confusing or challenging wayfinding and navigation mechanisms that cause users to lose interest (Kapusy and Lógó, 2020).

In their case study of the popular social messaging mobile application Snapchat, Kapusy and Lógó identify Snapchat’s ‘learn by doing’ approach to onboarding. Snapchat features no tutorials, opting to immerse the user instead. In their research, Kapusy and Lógó found that users who managed to ‘learn by doing’ felt a sense of satisfaction and relief after the onboarding process, although they also found that users could make mistakes due to the lack of instruction, leading to frustration.

Although I had not read Kapusy and Lógó’s research prior to designing my onboarding mechanism for the Multimedia Centre AR app, their findings did tie in with my expectations. I could have implemented a ‘learn by doing’ approach in the Multimedia Centre AR app, especially if I adopted a user interface and set of controls that behaved in a manner the user recognised, i.e. a standardised interaction scheme shared across many AR mobile applications (Craig, 2013). The success of such a method echoes Jakob Nielsen’s Law of Internet User Experience, which states that users prefer websites to function in the ‘same way as all the other sites they already know’, by adopting widely used conventions and design patterns (Nielsen, 2004). However, the user’s success in the onboarding process could then be seen as dependent upon their prior knowledge of other AR mobile applications. As my user base may feature students who have motor impairments, and knowing that many AR mobile applications are ‘not specifically designed in advance for non-visual interactions’, I felt that relying upon prior knowledge could have been a weakness in my final design (Herskovitz et al., 2020).

When considering an onboarding process for the Multimedia Centre AR mobile app, I initially considered creating a set of paged screens that could be swiped across, otherwise known as a ‘Deck of Cards’ (Joyce, 2020). These screens would only be shown to the user on first launch of the app, and would comprise short tips explaining several features of the app in an order that resembled how to use it. Despite using this method in the DM7917 module, I decided against it for DM7903, as I thought that I would need to communicate quite a lot of information to the user (such as object placement, rotation, and movement controls, Freeze Frame information, and other interaction guidance), which could lead to memory strain (Joyce, 2020).

In contrast to the ‘Deck of Cards’ approach, which could be used for promoting a few ‘need-to-know’ instructions, I resolved that the ‘Interactive Walkthrough’ methodology of onboarding would be more suitable for communicating instructions. This method would permit a ‘learn-by-doing’ approach to an extent, while also providing guidance to the user as they follow a potentially unfamiliar and novel design (Joyce, 2020). The interactive walkthrough would only be shown on first launch and follows the initial steps of placing an object, interacting with it, then creating a freeze frame and reviewing it – all basic functions that would be completed in the same order regardless of whether the user needed the onboarding process or not; for this reason, I believe that my choice of onboarding process does not come with a higher interaction cost or lower user performance (Joyce, 2020).
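Were the app to be coded, gating the walkthrough to first launch is straightforward. The sketch below is a hypothetical SwiftUI example using a persisted flag; the key name, views, and walkthrough steps are placeholders, not part of the prototype.

```swift
import SwiftUI

// Hypothetical first-launch gate: the interactive walkthrough is shown once,
// then a persisted flag routes returning users straight to the AR experience.
struct RootView: View {
    @AppStorage("hasCompletedOnboarding") private var hasCompletedOnboarding = false

    var body: some View {
        if hasCompletedOnboarding {
            ARTrainingView()
        } else {
            InteractiveWalkthroughView {
                hasCompletedOnboarding = true // never shown again after the first run
            }
        }
    }
}

struct ARTrainingView: View {
    var body: some View { Text("AR training experience") } // placeholder
}

struct InteractiveWalkthroughView: View {
    let onFinish: () -> Void
    var body: some View {
        // Guided steps: place object → interact → freeze → review
        Button("Finish walkthrough", action: onFinish)
    }
}
```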

In justifying my choice of onboarding process, I have undertaken a small journey of research into the world of onboarding. Prior to this, I did not realise the wealth of onboarding methods and related studies that had already been conducted in this area of Human-Computer Interaction (HCI). This is certainly an area that I would be interested to explore in the future, potentially in my DM7908: Independent Study module.

References

Craig, A.B. (2013). Understanding Augmented Reality: Concepts and Applications. San Diego: Elsevier Science & Technology Books.

Herskovitz, J., Wu, J., White, S., Pavel, A., Reyes, G., Guo, A. and Bigham, J. (2020). Making Mobile Augmented Reality Applications Accessible. ASSETS ’20: International ACM SIGACCESS Conference on Computers and Accessibility. [online] Available at: https://dl.acm.org/doi/10.1145/3373625.3417006 [Accessed 26 Sep. 2021].

Joyce, A. (2020). Mobile-App Onboarding: an Analysis of Components and Techniques. [online] Nielsen Norman Group. Available at: https://www.nngroup.com/articles/mobile-app-onboarding/ [Accessed 10 Nov. 2021].

Kapusy, K. and Lógó, E. (2020). User Experience Evaluation Methodology in the Onboarding Process: Snapchat Case Study. Ergonomics in Design: the Quarterly of Human Factors Applications, p.106480462096227.

Nielsen, J. (2004). The Need for Web Design Standards. [online] Nielsen Norman Group. Available at: https://www.nngroup.com/articles/the-need-for-web-design-standards/.

DM7903 Week 8.2 – Usability Test 2 Report

00:00 – Tester is asked to read introductory paragraph
00:40 – Test started
00:53 – Tester visibly appears confused
01:05 – Tester explains that they intend to move/rotate the phone or swipe
01:13 – Facilitator explains the limitations of a medium-fidelity prototype test. Movement of phone will not work, but tester’s intentions are noted
01:25 – Tester attempts to swipe camera to rotate it – is this an expected behaviour? Swiping = rotation of object
01:39 – Tester taps “AR Mode” and facilitator assures tester that AR Mode is already active
01:50 – Tester taps “Object Selection” in an attempt to move the process forward – The tester is clearly stuck and unsure what to do
01:55 – Tester taps the “Ask the Trainer” button on screen – it took a long time to get to this? Perhaps it could move repeatedly to get the user’s attention?
01:58 – Tester verbally confirms their realisation that the “Ask the Trainer” feature could help
02:08 – Facilitator explains that keyboard functionality is limited in a medium fidelity prototype
02:30 – Helpful factoid appears on screen
02:50 – Tester confirms they now understand how to rotate the object and explains that they’ve found the process helpful
03:05 – Tester presses the “Back” button
03:07 – Tester rotates object repeatedly according to the instructions given

Summary

I’m pleased I’ve been able to shorten my usability testing process. This makes it much easier to focus on a particular feature and to evaluate its performance in the test. After the test had completed I felt as though no issues had been found. However, reviewing this report a few days later, I realise that some glaring issues were highlighted…

  1. The tester’s instinct is to swipe on the object to rotate it, not to tap it. I wonder if this aligns with other augmented reality mobile applications i.e.: a “standardised interaction scheme” (Craig, 2013)?
  2. The Tester spent ~30 seconds in a confused state, rather than seeking help. There is a possibility that the tester only persisted because of their awareness of the test; an end user may have left the application frustrated and not completed their training. Eventually, the tester found the “Ask the Trainer” facility and was able to resolve the confusion. Perhaps the “Ask the Trainer” feature should be more prominent? I could consider the placement, colour, and animation of the feature to make it more prominent

Actions

  1. Support both tapping and swiping gestures for rotating the object (see the sketch after this list)
  2. Make the “Ask the Trainer” feature more prominent. I could consider making alterations to the placement, colour, timings, and animations
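Below is a hedged sketch of how both gestures could coexist in a coded (SwiftUI) build: a tap rotates the object by a fixed step, while a horizontal swipe rotates it continuously. The grey rectangle stands in for the AR object; all values are illustrative.

```swift
import SwiftUI

// Hypothetical dual-gesture rotation: tap = step rotation, swipe/drag = continuous rotation.
struct RotatableObjectView: View {
    @State private var degrees: Double = 0
    @State private var dragStartDegrees: Double = 0

    var body: some View {
        Rectangle()
            .fill(.gray)
            .frame(width: 160, height: 120) // stand-in for the AR camera model
            .rotationEffect(.degrees(degrees))
            .onTapGesture {
                withAnimation { degrees += 45 } // tap rotates in 45° steps
            }
            .gesture(
                DragGesture()
                    .onChanged { value in
                        // horizontal swipe rotates proportionally to drag distance
                        degrees = dragStartDegrees + value.translation.width / 2
                    }
                    .onEnded { _ in dragStartDegrees = degrees }
            )
    }
}
```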

Informed Consent Form


References

Craig, A.B. (2013). Understanding Augmented Reality: Concepts and Applications. San Diego: Elsevier Science & Technology Books.

DM7903 Week 8.1 – Journey of Producing Medium Fidelity AR Experience

I’ve produced a short, narrated video of my journey towards creating a medium-fidelity AR experience for the Multimedia Centre AR mobile app. The video shows development from photographs of objects, to experimentation in Adobe After Effects, Cinema 4D, and Apple’s Object Capture and Reality Composer applications. I also explain and rectify some of the challenges that I experienced along the way.

DM7903 Week 8 – Marwell Zoo Mini Project

In this week’s lecture we were set a design challenge to produce a mobile application for Marwell Zoo that addressed some of the current application’s shortcomings. I paired up with a fellow student to turn the challenge into a collaboration opportunity – my role was primarily to facilitate the ‘flow’ of the experience, so I chaired the development of the Content Directory, Structure of Experience, and wireframing, whilst my collaborator formalised our design research into a presentable format, contributed to our research, and worked on the visuals (graphics, logo, and gamification).

First Thoughts
Our initial research involved making observations about the current mobile application. We found the usability to be quite poor, due to the placement of buttons, the lack of a tab bar, and a lack of consideration for users with vision impairments.

Marwell Zoo’s mission to connect people with nature, and its consideration of biodiversity, inspired us to ‘gamify’ the app by producing a trail-like experience whereby users could visit different animal enclosures and steadily solve a mystery. We didn’t resolve what the mystery would be at this stage, but we did decide that users could unlock new clothing items for an avatar that represented them on the zoo’s map, and that they would unlock audio tapes containing information about the animals they were seeing.

Above: A list of our initial thoughts and notes

Design Research
Below is a diagram of our design research journey for the Marwell app, alongside some of our visual observations.

Above: Our design research journey for the Marwell app

Structure of Experience
Opting to create a small mobile application in the time that we had, we decided upon a small structure of experience. The number of screens required would fit nicely into a tab bar – allowing users to reach all of the screens directly from the ‘Safari/Map screen’ (essentially, the Home Screen), rather than hiding them in a hamburger menu.

Above: The structure of experience
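For illustration, a minimal sketch of this tab-bar structure in SwiftUI might look like the following; the ‘Trail’ tab name and the placeholder views are assumptions beyond what we documented.

```swift
import SwiftUI

// Hypothetical sketch of the tab-bar structure: every top-level screen is one
// tap away, rather than being buried in a hamburger menu.
struct MarwellAppView: View {
    var body: some View {
        TabView {
            Text("Safari / Map")  // home screen placeholder
                .tabItem { Label("Map", systemImage: "map") }
            Text("Trail")         // assumed mystery-trail screen
                .tabItem { Label("Trail", systemImage: "pawprint") }
            Text("Settings")
                .tabItem { Label("Settings", systemImage: "gearshape") }
            Text("More")
                .tabItem { Label("More", systemImage: "ellipsis.circle") }
        }
    }
}
```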

Content Directory
A quick Content Directory was created within 10 minutes, allowing us to move on to the paper-based wireframing task ahead of us. We would use this Content Directory to make sure that all the required artefacts for each screen were placed on their respective wireframes.

Above: The content directory

Quick Wireframes on Paper
We decided that we wouldn’t commit time to wireframing the ‘Settings’ and ‘More’ screens, as these would essentially be lists of selectable menu items. We were more interested in producing the ‘creative’ aspects of this short challenge within the lecture time that we had.

Above: Our quick paper wireframes

Logo Design
Using some stock imagery and the Marwell Zoo logo, my collaborator produced a visualisation for a graphic/logo to be used in the mobile application. This was an excellent task to complete at this stage, as it established an agreed visual identity that could be carried across to the high fidelity prototype.

Above: The logo, designed by Tina Scahill


High Fidelity Prototype (in Adobe XD)
To begin producing a high fidelity prototype, we started by moving our paper wireframes onto Adobe XD. We took inspiration for the colour scheme from the Marwell Zoo logo (as above).

Above: How our initial paper wireframes translated on Adobe XD
Above: Our final high-fidelity prototype in Adobe XD
Above: A video walkthrough of our high fidelity prototype

DM7903 Week 7.2 – Medium Fidelity Prototype Creation

Having tested the navigational elements and the app functions in a usability test on my paper prototype, I felt ready to begin the medium-fidelity prototyping phase. In real life I would have spent more time re-testing the paper prototype; however, because this is a university module, I don’t have that time.

To begin, I recreated each page from my paper prototype in Apple’s Keynote software. This software was perfect for my requirements, as it allowed me to make use of Apple’s Human Interface Guidelines, complete with user interface elements that I could borrow. It would also be able to function on my phone for usability testing.

The medium-fidelity prototype took much longer to build than the paper prototype. I included fixes to issues identified by my previous usability tester (e.g. changing the ‘save’ terminology to ‘export’). I also included the University’s branding, such as its colour palette, graphics, and typeface. This would enable the app to benefit from synergy – i.e. the University’s good reputation would help the app feel like a trustworthy learning resource.

I also developed the prototype so that it adhered to common practices of app design, such as the inclusion of a loading screen, tab bars, and a navigation bar. I employed a cyclical, iterative method when producing the prototype, allowing myself time for reflection between each version.

In the final version of the medium fidelity prototype it was imperative to include hyperlinks between each screen and the tab bar elements. This allowed me to carry out a usability test with my own iPhone and the Keynote app installed on it.

Colour Contrast

The third iteration of the medium fidelity prototype highlighted some potential colour contrast issues. I checked each foreground and background colour combination and assessed it against the Web Content Accessibility Guidelines for WCAG AA compliance, which is an “acceptable level of accessibility for many online services” (Digital Accessibility Centre, 2017); a worked sketch of the contrast calculation follows the notes below:

* The first colour combination on the above slide fails due to the combination of a low colour-contrast ratio and small text size (on a small phone screen). To rectify this, I decided to replace the University’s green tint colour with the main tint colour instead (as below)

** System-based colour contrast issues can be resolved in the operating system’s menus. I have no control over these, so must overlook them for now and focus on my app experience.
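For reference, the WCAG 2.1 contrast check I applied can be expressed in a few lines of code. The sketch below implements the published relative-luminance and contrast-ratio formulas; the example colour values are illustrative, not the University’s actual palette.

```swift
import Foundation

// WCAG 2.1 contrast check: relative luminance uses linearised sRGB channels,
// and AA requires a ratio of at least 4.5:1 for normal text (3:1 for large text).
struct RGB {
    let r: Double, g: Double, b: Double // 0–255

    var relativeLuminance: Double {
        func linearise(_ channel: Double) -> Double {
            let c = channel / 255.0
            return c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
        }
        return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)
    }
}

func contrastRatio(_ a: RGB, _ b: RGB) -> Double {
    let lighter = max(a.relativeLuminance, b.relativeLuminance)
    let darker = min(a.relativeLuminance, b.relativeLuminance)
    return (lighter + 0.05) / (darker + 0.05)
}

// Example: white text on an illustrative mid-green background.
let ratio = contrastRatio(RGB(r: 255, g: 255, b: 255), RGB(r: 0, g: 122, b: 77))
print(String(format: "%.2f:1 – passes AA for normal text: %@",
             ratio, ratio >= 4.5 ? "yes" : "no"))
```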

References

Digital Accessibility Centre (2017). The icing on the cake: The difference between AA and AAA compliance – Digital Accessibility Centre (DAC). [online] www.digitalaccessibilitycentre.org. Available at: https://www.digitalaccessibilitycentre.org/index.php/blog/20-diary/187-the-icing-on-the-cake-the-difference-between-aa-and-aaa-compliance [Accessed 28 Nov. 2021].

DM7903 Week 7.1 – After Effects and Cinema 4D Experimentation

This week I filmed some test footage for experimenting with photogrammetry scans that I’ve created using Apple’s Object Capture API. The footage was filmed in a classroom and depicted the camera panning across the room and then lingering on an empty table, steadily rotating and zooming in on the flat table surface. When filming, my thought was that if I placed a virtual object such as a camera on that table, I could give the illusion that I was purposefully getting a better look at the camera’s interface by moving around it.

In the following explanation, I follow some of the processes outlined in this tutorial: https://helpx.adobe.com/after-effects/how-to/insert-objects-after-effects.html

Having filmed some test video footage, I could proceed with initial experiments, starting with Adobe After Effects. As I was not looking to create an interactive version of the prototype, but rather a version that I could present to potential investors and stakeholders, I only needed to create video footage that looked like the augmented reality experience.

The below example evidences an experiment whereby I placed my photogrammetry scan of a lens onto a table using a combination of Adobe After Effects and Cinema 4D Lite. I found this process to be very difficult, as I am not au fait with working in 3D software – it is far from the two-dimensional design fields that I’m used to working in, such as photography and graphic design. Working with four program windows, covering the X, Y and Z axes, was challenging; however, I feel I was able to use them effectively to create a rudimentary example of my vision.

The amount of time required for me to produce this video was easily the largest drawback of the method. The rendering time for this 25-second video was well over one hour. Although this doesn’t sound like much, it’s a lot when you consider the need for iterations and further tweaks.

I carried out a similar experiment using Apple’s Reality Composer software, which is built into Xcode. In this software I could construct AR experiences using the .usdz objects that I had created via photogrammetry. My first attempt at this wasn’t very responsive, potentially due to the large file sizes of my photogrammetry scans. Once I had reduced the quality (and file sizes) of the scans, the outcome was much more responsive. I’ll post a blog post about this shortly…
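As a pointer for that follow-up post, the reduction in scan quality can be requested at capture time. The sketch below is a hedged example of generating a lighter-weight .usdz with Object Capture’s PhotogrammetrySession, assuming macOS and a folder of source photographs; the paths are placeholders.

```swift
import RealityKit

// Hypothetical command-line sketch: build a reduced-detail .usdz so Reality
// Composer stays responsive. The .full and .raw detail levels produce much
// larger files than .reduced.
@main
struct ScanExporter {
    static func main() async throws {
        let photos = URL(fileURLWithPath: "/path/to/photos", isDirectory: true)
        let output = URL(fileURLWithPath: "/path/to/lens-reduced.usdz")

        let session = try PhotogrammetrySession(input: photos)
        try session.process(requests: [
            .modelFile(url: output, detail: .reduced)
        ])

        for try await result in session.outputs {
            switch result {
            case .processingComplete:
                print("Finished: \(output.path)")
            case .requestError(_, let error):
                print("Failed: \(error)")
            default:
                break
            }
        }
    }
}
```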