Content enhancement with augmented reality and machine learning
Justin Freeman, Bureau of Meteorology, GPO Box 1289, Melbourne, Vic 3001, Australia. Email: justin.freeman@bom.gov.au
Journal of Southern Hemisphere Earth Systems Science 70(1) 143-150 https://doi.org/10.1071/ES19046
Submitted: 21 March 2020 Accepted: 10 August 2020 Published: 1 October 2020
Journal Compilation © BoM 2020 Open Access CC BY-NC-ND
Abstract
Content enhancement of real-world environments is demonstrated through the combination of machine learning methods with augmented reality displays. Advances in machine learning methods and neural network architectures have enabled fast and accurate object and image detection, recognition and classification, as well as machine translation, natural language processing and neural network approaches to environmental forecasting and prediction. These methods equip computers with a means of interpreting the natural environment. Augmented reality is the embedding of computer-generated assets within the real-world environment. Here I demonstrate, through the development of four sample mobile applications, how machine learning and augmented reality may be combined to create localised, context-aware and user-centric environmental information delivery channels. The sample mobile applications demonstrate: augmented reality content enhancement of static real-world objects to deliver additional environmental and contextual information; language translation to improve the accessibility of forecast information; and a location-aware, augmented reality rain event notification application that leverages a nowcasting neural network.
Keywords: augmented reality, content enhancement, environmental information delivery channels, machine learning, mobile device computing, neural networks, user centred rain nowcasting, weather forecast situational awareness.
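The core pattern the abstract describes is a pipeline from on-device recognition to an augmented reality overlay: classify what the camera sees, look up contextual environmental content for that object, and anchor the content as an annotation. The Python sketch below illustrates that pipeline in a minimal form; the classifier is stubbed out, and all names, labels and content strings are illustrative assumptions rather than code from the four sample applications.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ARAnnotation:
    """A computer-generated asset to embed in the real-world view."""
    label: str        # recognised object class
    content: str      # environmental content to overlay
    confidence: float # classifier confidence, 0..1

def classify_frame(frame_id: str) -> Tuple[str, float]:
    """Illustrative stand-in for an on-device classifier (e.g. a
    MobileNet-style network); a real app would run inference on the
    camera frame rather than a lookup table."""
    stub = {"frame_beach": ("beach", 0.94), "frame_river": ("river", 0.88)}
    return stub.get(frame_id, ("unknown", 0.0))

# Hypothetical mapping from recognised object to contextual forecast content.
CONTENT = {
    "beach": "UV index: extreme; sea surface temperature 21 C",
    "river": "River level rising; minor flood watch current",
}

def annotate(frame_id: str, threshold: float = 0.5) -> Optional[ARAnnotation]:
    """Classify a frame and, if recognition is confident and content
    exists for the object, return an overlay annotation."""
    label, conf = classify_frame(frame_id)
    if conf < threshold or label not in CONTENT:
        return None  # nothing to overlay for this frame
    return ARAnnotation(label, CONTENT[label], conf)
```

In a deployed application the annotation would be rendered by the platform's augmented reality framework, positioned relative to the detected object; the sketch stops at producing the annotation itself.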