
1 Introduction

Reproduction is an ongoing research project between the Shanghai Academy of Fine Arts and the MIT Media Lab that introduces a user-friendly system for saving and reproducing special moments in people's daily lives. According to statistics, more than one trillion selfies are posted on social networks each year. People increasingly photograph their work, life, travel, food and pets and share the pictures on social platforms. The convenience of taking photos with a mobile phone drives this trend, and more importantly, the phenomenon shows that people are glad to record moments tied to a particular time, a specific place or a special person. Looking at a scene stored in a photo on a special day in the future is highly evocative. It has been around two hundred years since the camera was invented, yet whether the pictures are black-and-white or color, dynamic or panoramic, their nature has not changed remarkably: they remain a valuable way for users to look back and recall memories. An obvious limitation is that only a restricted amount of visual information can be gained from a single photo. With the passage of time, the viewer is likely to forget the time, place, weather or accompanying people in the photo, and may retain no memory of the experience at all. The overarching purpose of this study is to collect richer environmental information through current technologies, as easily as taking a photo, and to give users better ways to experience that information later.

The research works backward from the needs of users. What kind of browsing environment can help users retrieve memories of the past? The answer is to restore the original environment as faithfully as possible, so as to reawaken the user's sense of space and presence at a particular time. A review of history, from the Sala delle Prospettive created by Baldassarre Peruzzi in the early 16th century, to the panoramas of the 18th century, to the virtual reality that emerged at the end of the 20th century, shows creators devoting themselves to 360-degree image spaces that surround the audience in a closed environment, creating an illusion within the image that gives the audience a sense of space and presence. As Wolfgang Kemp described it, the panorama is "a space of existence", whose essence is to make participants feel immersed in the illusion of a real scene. With the popularity of 360 cameras in recent years, taking 360 photos has become even more convenient than taking ordinary photos: there is no need to focus or compose, and users only press the shutter to record the whole scene, themselves included. With the support of a VR headset, users can then immerse themselves in the 360-degree panoramic space and browse the scene from any angle. The research therefore takes 360 shooting and virtual reality as its main input and output modes.

2 Related Work

Over the past years, research on wearable devices has mainly centred on human health. Research in spatial augmented reality has found innovative ways to bring the physical home environment into entertainment and remote collaboration, creating more immersive experiences. Another expansion of this research field is the accommodation of further sensory modalities, e.g. mechanical, haptic or even olfactory manipulation.

3 Data Collection

Data collection consists mainly of environmental data and user data. The former refers to photos and environmental sound: photography is the most direct way to record environmental features, and sound is a very important factor in scene reconstruction. To collect environmental data conveniently and comprehensively, the equipment mainly comprises a 360 camera and an ambisonic audio recorder, which quickly capture the visual and auditory materials of the current environment. User data, on the other hand, means the user's heart rate at the moment the photo is taken. After these basic data are collected, further useful data are generated online by querying open data platforms and later used to reconstruct the environment for the user. The 360 camera used here is a Ricoh Theta S. It obtains GPS information from a connected cell phone, so the metadata of the collected photos contains geographic information. After the geographic and temporal information is extracted, more environmental data can be obtained through an open data platform, including temperature, wind direction, humidity and local headlines (Fig. 1).

Fig. 1. The apparatus of data collection.
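As a concrete illustration of the extraction step, the sketch below reads the timestamp and GPS coordinates from a photo's EXIF metadata using the Pillow library. The helper name is ours, and the exact tags present depend on the camera and the paired phone app; this is a minimal sketch, not the system's actual implementation.

```python
# Minimal sketch: pull shooting time and GPS coordinates out of a 360
# photo's EXIF block, assuming standard DateTimeOriginal/GPSInfo tags.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def extract_time_and_location(photo_path):
    """Return (timestamp_string, latitude, longitude) from EXIF metadata."""
    exif = Image.open(photo_path)._getexif() or {}
    named = {TAGS.get(k, k): v for k, v in exif.items()}
    timestamp = named.get("DateTimeOriginal")  # e.g. "2019:03:14 15:09:26"
    gps = {GPSTAGS.get(k, k): v for k, v in named.get("GPSInfo", {}).items()}

    def to_degrees(dms, ref):
        # EXIF stores coordinates as (degrees, minutes, seconds) rationals.
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]) if gps else None
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]) if gps else None
    return timestamp, lat, lon
```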

The ambisonic audio recorder used to collect environmental sound is the Zoom H3-VR, which integrates four microphone capsules to record four-channel ambisonic sound in real time. This makes it possible to localize sound sources in the virtual environment.

The 360 camera and the ambisonic audio recorder are fixed on a portable selfie stick, and the recording direction of the microphone must be aligned with the front of the 360 camera so that the image and the sound sources match in direction. Apart from these two devices, the selfie stick also carries a phone for control and sound recording, a thermal camera attached to the phone, a GoPro camera and a Bluetooth camera switch.

The thermal camera faces the user and is used to take selfies and record the temperature of the user and the surrounding environment. Because the open data platform can only provide outdoor temperatures, this device lets the user obtain the temperature of the surrounding environment indoors. The GoPro camera is aligned with the user's view and captures what the user sees. Finally, the user can tap the Bluetooth camera switch to trigger the built-in application on the phone, which collects all the data in one step.

The project requires users to wear a device that records heart rate while shooting 360 photos. Since the phone used in the study is an iPhone, an Apple Watch is used to record heart-rate data for compatibility.

The data-collection procedure is much like taking a selfie with a selfie stick. The user only needs to point the front lens of the 360 camera at what they see, because the other devices are mounted in the correct orientation relative to the 360 camera. After all settings are configured in the app, every device is armed, and the user simply presses the Bluetooth camera switch to capture all the data.

The information obtained from one recording is as follows: a 360 photo with time and location information, a five-second ambisonic audio file, a thermal-imaging selfie with the user's temperature, a five-second high-definition video file and the user's heart rate (Fig. 2).

Fig. 2. The way of using the apparatus.
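The bundle produced by one press of the Bluetooth switch can be pictured as a simple record; the structure below is illustrative only, since the paper does not specify an in-memory schema.

```python
# Hypothetical container for the five items captured per recording.
from dataclasses import dataclass

@dataclass
class MomentRecord:
    photo_path: str           # 360 photo with EXIF time and GPS
    audio_path: str           # five-second 4-channel ambisonic WAV
    thermal_selfie_path: str  # thermal image with user temperature
    video_path: str           # five-second HD clip of the user's view
    heart_rate_bpm: float     # from the Apple Watch at shutter time
```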

4 Virtual Scene

The collected images, sounds and related data are used to create virtual reality scenes. The hardware used in this project is the Oculus Go headset, chosen for its relatively low price and standalone operation, which make it more suitable for ordinary consumers.

The purpose of creating a virtual scene is to recreate the moment saved by the user and to help the user visually, audibly and psychologically reshape his or her previous experience. The user's interactive scene is set in a three-dimensional space. Wearing the Oculus Go headset, the user can browse thumbnails of scenes with the controller, much like browsing photos on a computer. When the user selects a scene, the entire virtual space is surrounded by the 360 photo. The interaction is very intuitive: users can freely change their viewpoint and examine the details of the environment in their own scenes. At the same time, they hear sound coming from all directions of the scene; because the recorded sound is matched to the orientation of the scene, the sound reaching both ears changes as the user turns his or her head, genuinely reconstructing the original auditory sensation. Finally, the user can open the environment-information menu through the controller. The menu includes the following six functions: (1) Weather Conditions, (2) Heartbeats, (3) Breathing, (4) Geographical Location, (5) News Headlines, (6) Scene Restoration.
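The direction-dependent playback can be understood through first-order ambisonics: turning the head by a yaw angle is equivalent to counter-rotating the recorded soundfield about the vertical axis. The sketch below shows only this underlying math, assuming traditional B-format channel ordering (W, X, Y, Z); in practice the headset application would delegate this to its audio engine's ambisonic decoder.

```python
import numpy as np

def rotate_bformat_yaw(wxyz, head_yaw_rad):
    """Counter-rotate a first-order B-format frame (W, X, Y, Z) about the
    vertical axis so the soundfield stays fixed as the listener turns."""
    w, x, y, z = wxyz
    c, s = np.cos(head_yaw_rad), np.sin(head_yaw_rad)
    # W (omnidirectional) and Z (vertical) are unaffected by yaw.
    return np.array([w, c * x + s * y, -s * x + c * y, z])
```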

4.1 Weather Conditions

Weather conditions: the weather on the day of shooting is obtained from the open data platform according to the time and location of the 360 photo. The information includes temperature, weather conditions, wind direction, wind speed and humidity. If the photo was taken indoors, users can view the thermal photo to get the indoor temperature.
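The paper does not name its open data platform; as one concrete possibility, the free Open-Meteo historical archive can be queried with the photo's coordinates and date. The sketch below is a minimal example under that assumption; parameter names follow that API's documentation and may change.

```python
import requests

def fetch_weather(lat, lon, date):
    """Fetch hourly weather for one day; `date` is 'YYYY-MM-DD'."""
    resp = requests.get(
        "https://archive-api.open-meteo.com/v1/archive",
        params={
            "latitude": lat,
            "longitude": lon,
            "start_date": date,
            "end_date": date,
            "hourly": "temperature_2m,relative_humidity_2m,"
                      "wind_speed_10m,wind_direction_10m",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["hourly"]  # per-hour lists keyed by variable name
```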

4.2 Heartbeats

Heartbeats: the user's heart rate in beats per minute is displayed next to a heart-shaped button. When the user clicks the button, the sound of a heartbeat appears in the scene, generated in real time at the recorded rate, and the whole scene flashes in rhythm for five seconds.
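One simple way to realize this sonification, sketched under our own assumptions rather than taken from the system itself, is to render each beat as a short decaying low-frequency thump spaced by 60/bpm seconds:

```python
import numpy as np

def heartbeat_track(bpm, seconds=5, sr=44100):
    """Synthesize a mono heartbeat track at the recorded rate."""
    out = np.zeros(int(seconds * sr))
    t = np.arange(int(0.12 * sr)) / sr
    thump = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 30)  # 60 Hz decaying tone
    step = int(sr * 60.0 / bpm)                            # samples per beat
    for start in range(0, len(out) - len(thump), step):
        out[start:start + len(thump)] += thump
    return out  # float samples in [-1, 1], ready for playback
```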

4.3 Breathing

Breathing: a breathing sound simulated from the heart rate (breaths per minute = heart rate / 4). Although this frequency is not accurate, it helps set the atmosphere.
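The derivation is direct; for example:

```python
def breathing_rate(heart_rate_bpm):
    """Breaths per minute derived from heart rate, per the rule above."""
    return heart_rate_bpm / 4.0

# A heart rate of 72 bpm yields 18 breaths/min, i.e. one breath
# sound roughly every 60 / 18 ≈ 3.3 s in the scene.
```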

4.4 Geographical Location

Geographical location: based on the GPS information in the 360 photo, a satellite view of the shooting place is obtained through the Google Maps API, and the user can adjust the size of the displayed area.
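The satellite view can be fetched as a static image; this sketch builds a Google Static Maps request (a valid API key is assumed, and the helper name is ours). The zoom parameter corresponds to the display-area size the user adjusts in the headset.

```python
def satellite_url(lat, lon, zoom=16, api_key="YOUR_KEY"):
    """Build a Google Static Maps URL for a satellite view of the spot."""
    return (
        "https://maps.googleapis.com/maps/api/staticmap"
        f"?center={lat},{lon}&zoom={zoom}&size=640x640"
        f"&maptype=satellite&key={api_key}"
    )
```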

4.5 News Headlines

News headlines: local headlines are fetched from public platforms according to the time and location of the 360 photo. When the user clicks on a headline picture, a voice-over news summary is played.
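The paper says only "public platforms"; as one illustrative option, a news aggregation service such as NewsAPI.org can be filtered by the photo's date. Endpoint and parameters below follow that service's documented /v2/everything call, but this is an assumption, not the system's actual source.

```python
import requests

def fetch_headlines(query, date, api_key):
    """Return article titles matching `query` on `date` ('YYYY-MM-DD')."""
    resp = requests.get(
        "https://newsapi.org/v2/everything",
        params={"q": query, "from": date, "to": date,
                "sortBy": "popularity", "apiKey": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return [a["title"] for a in resp.json().get("articles", [])]
```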

4.6 Scene Restoration

Scene restoration: this function mixes all the information in the scene. The images, sounds, video frames and all the relevant data are deconstructed to produce a random effect lasting about ten seconds, mainly a mingling of the various sound sources and changes in the light and shade of the scene (Fig. 3).
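The audio half of this effect might be realized as below: the scene's recorded sounds are layered at random offsets and gains over a ten-second window. This is a sketch under our own assumptions; the published system does not specify its mixing algorithm.

```python
import random
import numpy as np

def restoration_mix(sources, seconds=10, sr=44100):
    """Randomly layer a scene's mono recordings into a ~10 s collage."""
    out = np.zeros(int(seconds * sr))
    for src in sources:
        offset = random.randint(0, max(1, len(out) - len(src)))
        gain = random.uniform(0.2, 1.0)
        end = min(len(out), offset + len(src))
        out[offset:end] += gain * src[: end - offset]
    peak = np.max(np.abs(out)) or 1.0
    return out / peak  # normalize to avoid clipping
```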

Fig. 3. The main scene in VR.

5 Archive Data

There are three options for saving and loading a user file: 1. Open a standard 2:1 equirectangular 360 photo that contains GPS metadata. 2. Open an SE (Shared Environments) file on the user's disk. 3. Create an archived SE file from user data. The software extracts the key elements that most affect human feeling from the raw data, fuses them with open data, and generates a personal SE file that stores information-rich data for future applications to represent the user's past moments (Figs. 4 and 5).
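The SE file format is not specified in detail; the sketch below assumes a simple JSON container whose fields mirror the data listed in Sects. 3 and 4, purely for illustration.

```python
import json

def save_se_file(path, record, open_data):
    """record: dict of captured data; open_data: dict fetched online later."""
    payload = {
        "version": 1,                          # hypothetical schema version
        "photo": record["photo_path"],
        "ambisonic_audio": record["audio_path"],
        "thermal_selfie": record["thermal_selfie_path"],
        "pov_video": record["video_path"],
        "heart_rate_bpm": record["heart_rate_bpm"],
        "timestamp": record["timestamp"],
        "gps": record["gps"],                  # [lat, lon] from EXIF
        "weather": open_data.get("weather"),   # temperature, wind, humidity
        "headlines": open_data.get("headlines"),
    }
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)

def load_se_file(path):
    with open(path) as f:
        return json.load(f)
```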

Fig. 4. The menu for saving and loading files.

Fig. 5. SE archive file.

6 Limitations and Future Work

Sensation is the brain's response to the individual attributes of things that directly impinge on the sense organs. The five main human senses are vision, hearing, touch, taste and smell, the first three of which are the top priorities in human-machine interaction. Sensation acts as a bridge that helps people feel and recognize the attributes of external objects. Back in 1954, psychologists at McGill University in Canada ran the first sensory-deprivation experiment. Subjects wore translucent goggles that made it difficult to see; the monotonous hum of an air conditioner limited their hearing; and touch was restricted by paper sleeves and gloves on the arms and splints securing the legs and feet. The subjects were left alone in the laboratory, and after a few days they exhibited a series of pathophysiological phenomena: hallucinations and delusions, distracted attention, slowed thinking, tension, anxiety, fear, and so on. The findings suggest that the development and sophistication of the brain rest on extensive contact with the external environment, and that living for a long time in an environment free of stimulation weakens the human organs in every respect. Moderate stimulation of human perception is therefore an effective way to engage people's full capacities. The VR devices currently used in the study are limited to visual and auditory stimuli; in the future, more modalities will be considered, for example additional sensors for data acquisition and tactile and olfactory feedback hardware for output. This should make it possible to restore the scenes saved by users more faithfully.

In addition, an online platform will be built so that users can upload and download SE files from their mobile devices or computers. On this platform, mobile applications will allow users to re-edit their moments and share SE files with others. Once a user downloads an SE file, he or she can use the application to experience the representation of the moment in virtual reality (Figs. 6 and 7).

Fig. 6. SE online platform.

Fig. 7. Development process.

7 Conclusion

When entering the virtual space, the weather conditions help users recall their bodily sensations, while the heartbeat and breathing sounds are conducive to reminding them of their emotions. Meanwhile, the location and news headlines help users rebuild the context of that particular place and time period. The audio-visual mixture created by scene restoration helps the user form a sense of familiarity with that period.

This study introduces a prototype system that stores and represents a user's moments. It identifies the key data that most affect human feeling, fuses them with open data to generate a virtual space of reproduction, and provides a user-friendly interface for both hardware and software.