
Live Lightning Detection with Deep Learning and Tensorflow on Android: Exploring the Outcome

18 Nov 2020 · CPOL · 3 min read
In this article we’ll go over the project outcome and put together some lessons learned for your future live detection tasks.

Introduction

In the previous article, we discussed real-time testing of our app on two Android devices.

In this one – the last of the series – we’ll go over the project outcome and put together some "lessons learned" for your future live detection tasks.

Outcome

In this series of articles, we’ve covered:

  • An introduction to Deep Learning
  • How to gather images to curate a dataset
  • Using Teachable Machine to train and export a TFLite model
  • Basic Android setup
  • Setting up an Android app based on the exported model
  • Manual real-time testing of the lightning detection app on two Android devices

Lessons Learned and Left to Learn

You can develop a DL-based TFLite model and implement it in an Android app to detect lightning. Congratulations!

The icing on the cake is that you can now apply the same procedure and logic to detect any object – you just need to train your model on an appropriate dataset. Isn’t that amazing?

Let’s say you want to detect dogs. There are plenty of publicly available datasets for that category. Gather the images and train your model, adjusting as needed: tuning the learning rate and other hyperparameters, or gathering a larger dataset to increase your model’s accuracy.
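If you train outside Teachable Machine, those knobs are easy to reach in Keras. Below is a minimal transfer-learning sketch – not the exact pipeline from this series – assuming a recent TensorFlow 2.x and a hypothetical dogs_dataset/ folder with one subfolder of images per breed. The learning rate passed to the optimizer is the first hyperparameter you’d tune.

import tensorflow as tf

# Hypothetical layout: dogs_dataset/<breed_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dogs_dataset", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# Reuse a pretrained backbone; train only a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone at first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# The learning rate is usually the first knob to turn; 1e-3 is a
# common starting point while the backbone is frozen.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=10)

If accuracy plateaus, unfreezing the top of the backbone and retraining with a much smaller learning rate (say, 1e-5) is the usual next step.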

If you want to build an app that detects dogs and classifies them by breed, you can do the classification using much the same approach we’ve discussed. Follow this tutorial up to the "Get information about labeled objects" section to see how to train your model with a separate class for each breed. Note that the breed names become the class labels (tags) used during supervised training.
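To make the tag idea concrete, here is a sketch of checking a breed classifier exported to TFLite from Python. Teachable Machine exports a .tflite model together with a labels file; the file names, the test image, and the [0, 1] normalization below are assumptions that depend on how your model was exported.

import numpy as np
import tensorflow as tf
from PIL import Image

# Assumed file names: the exported model plus a labels file with
# one class (breed) name per line.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = [line.strip() for line in open("labels.txt")]

# Resize the test image to whatever the model expects.
height, width = inp["shape"][1:3]
image = Image.open("beagle.jpg").resize((width, height))
x = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)  # float model assumed

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print(labels[int(np.argmax(scores))], float(np.max(scores)))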

If you want to build an app with a more advanced, interactive UI, you may consider saving the live stream video (see the API documentation and tutorial), or sharing it on a social media platform to show the world that you’ve detected lightning today – and that you have proof of it!
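The links above cover the Android-specific recording APIs. If you just want to see the idea in miniature, here is a desktop analogue (an assumption, not the app’s code) that captures webcam frames with OpenCV and writes a short clip; you could overlay the classifier’s label on each frame before saving it.

import cv2

cap = cv2.VideoCapture(0)  # default webcam
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # some cameras report 0; fall back to 30
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

writer = cv2.VideoWriter(
    "detection_clip.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

for _ in range(int(fps * 10)):  # record roughly ten seconds
    ok, frame = cap.read()
    if not ok:
        break
    # This is where you would draw the detection label onto the frame.
    writer.write(frame)

cap.release()
writer.release()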

What we’ve done on Android can be done on iOS as well – have a look at this guide. You can also create a custom classification model on iOS.

Our rather basic project can be enhanced in a number of ways. For example, you can apply object recognition in augmented and virtual reality (AR/VR) or mixed reality (MR) applications – see this developer portal and this guide.

Next Steps

We’ve learned how to gather a training dataset, train a model, build an Android app based on that model, and manually test the app. Without a doubt, you can now do similar projects from scratch, on your own.

But we’ve just scratched the surface of what’s possible! The same techniques you’ve learned here can be used to solve other problems and run on other types of devices. For example, you might want to build a lightning detector that runs on iOS, as we discussed above. Or you might want to run it on a Raspberry Pi or even a laptop computer instead.
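On a Raspberry Pi or laptop, the same .tflite file runs under the lightweight tflite-runtime package with essentially the same call pattern shown earlier. A rough sketch, assuming the same model and labels files as before plus an attached webcam:

import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = [line.strip() for line in open("labels.txt")]
height, width = inp["shape"][1:3]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    x = np.expand_dims(rgb.astype(np.float32) / 255.0, axis=0)  # float model assumed
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    print(labels[int(np.argmax(scores))])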

Better yet, the skills you’ve learned here are easily transferable. For example, if you want to detect things other than lightning, you can follow the same process you’ve learned in this series using a different set of training images.

If you want to learn more about the fundamentals of using TensorFlow and Keras to train your own model(s), there are tutorials available here on the CodeProject website.
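And once you’ve trained a model of your own, getting it into the format the app loads takes only a few lines. A minimal sketch, assuming a saved Keras model at a hypothetical path:

import tensorflow as tf

model = tf.keras.models.load_model("my_model.keras")  # hypothetical path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization

with open("model.tflite", "wb") as f:
    f.write(converter.convert())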

Happy coding!

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)