Over the last couple of months, I have been going through a lot of literature on human action recognition using computer vision. In this post, I will share a brief survey of Human Action Recognition. I will focus on literature from 2012–2019, as most of the earlier work relied on hand-crafted feature extraction, and over the past few years neural networks have been outperforming these manual techniques.
Human action recognition is a standard computer vision problem and has been well studied. The fundamental goal is to analyze a video to identify the actions taking place in it. Essentially, a video has a spatial aspect, i.e. the individual frames, and a temporal aspect, i.e. the ordering of those frames. Some actions (e.g. standing, running) can probably be identified using just a single frame, but more complex actions (e.g. walking vs. running, bending vs. falling) might require more than one frame's information to identify correctly. Local temporal information plays an important role in differentiating between such actions. …
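To make the two aspects concrete, a video clip can be viewed as a 4-D array with one temporal axis and three spatial axes. The shapes below are arbitrary examples, not values from any particular dataset:

```python
import numpy as np

# A hypothetical clip: 16 frames of 112x112 RGB, laid out as (T, H, W, C).
# T is the temporal axis (frame ordering); H, W, C are the spatial axes.
clip = np.zeros((16, 112, 112, 3), dtype=np.float32)

# A single-frame ("spatial only") model looks at one frame at a time:
frame = clip[0]        # shape (112, 112, 3)

# A temporal model consumes a stack of frames so it can exploit ordering:
snippet = clip[0:8]    # shape (8, 112, 112, 3)
```

A model that only ever sees `frame` can distinguish poses, but telling walking from running requires the ordering information present in `snippet`.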
The NFC Data Exchange Format (NDEF) is a standardized data format that can be used to exchange information between any compatible NFC device and another NFC device or tag. The data format consists of NDEF Messages and NDEF Records.
In this series of articles, I will explain how an NDEF message can be constructed and stored on an NFC tag. Assume a company wants to issue tags that can be used in public transport systems as a replacement for paper tickets. These tags can be tapped on an NFC-enabled Android device which will scan the tag and register the entry and exit time of the user. …
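As a minimal illustration of the format (not the ticketing payload used in this series), the sketch below hand-encodes a single NFC Forum Text record as a short NDEF record; the text and language code are arbitrary examples:

```python
def ndef_text_record(text: str, lang: str = "en") -> bytes:
    """Encode one NFC Forum Text record as a standalone short NDEF record."""
    # Text record payload: status byte (language-code length), language
    # code, then the UTF-8 text itself.
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    # Header byte 0xD1: MB (message begin) | ME (message end) | SR (short
    # record) flags set, TNF = 0x01 (NFC Forum Well Known Type).
    header = 0xD1
    # Short record layout: header, type length, payload length, type, payload.
    return bytes([header, 1, len(payload)]) + b"T" + payload

msg = ndef_text_record("Hello")
```

A real transport-ticket tag would carry an application-specific record type rather than a Text record, but the header/type/payload structure is the same.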
In this short post, I will walk you through the steps for streaming a live camera feed from a DSLR connected to the Raspberry Pi. We will be using gphoto2 for interfacing with the camera, FFmpeg for encoding the video, and FFserver for hosting the feed on a local webserver.
For this tutorial, I am assuming that you have a Raspberry Pi with Raspbian or NOOBS installed on it. You can get a Raspberry Pi from Amazon if you don’t already have one.
Also, I am assuming that you already have one of the supported cameras listed here:
I used the Canon Rebel T7 camera which I got from Amazon. …
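The overall pipeline can be sketched in Python with subprocess: gphoto2 writes the camera's live view to stdout, and that stream is piped into ffmpeg, which pushes it to FFserver. The feed URL below is an assumption — the actual feed name and encoding options come from your FFserver configuration:

```python
import subprocess

FEED_URL = "http://localhost:8090/feed1.ffm"  # assumed FFserver feed URL

def build_commands(feed_url: str):
    """Return the gphoto2 and ffmpeg argument lists for the pipeline."""
    gphoto_cmd = ["gphoto2", "--stdout", "--capture-movie"]
    ffmpeg_cmd = ["ffmpeg", "-i", "-", feed_url]  # read stdin, send to feed
    return gphoto_cmd, ffmpeg_cmd

def stream(feed_url: str = FEED_URL):
    """Pipe the camera's live view into ffmpeg (run this on the Pi)."""
    gphoto_cmd, ffmpeg_cmd = build_commands(feed_url)
    gphoto = subprocess.Popen(gphoto_cmd, stdout=subprocess.PIPE)
    ffmpeg = subprocess.Popen(ffmpeg_cmd, stdin=gphoto.stdout)
    ffmpeg.wait()
```

Calling `stream()` on the Pi with the camera connected is equivalent to running `gphoto2 --stdout --capture-movie | ffmpeg -i - <feed-url>` in a shell.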
In this short post, I will walk you through the steps for controlling and capturing pictures from a DSLR connected to the Raspberry Pi using a USB cable. There are numerous tutorials out there for this setup, but I found them to be somewhat outdated. This post covers the installation of libgphoto2 and gphoto2 from source and also covers the steps for capturing pictures using Python scripts.
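As a rough sketch of the Python side, one simple approach is to drive the gphoto2 CLI from a script via subprocess. The output filename here is an illustrative choice, not a requirement of the tool:

```python
import subprocess

def build_capture_command(filename: str):
    """Argument list telling gphoto2 to take a photo and download it."""
    return ["gphoto2", "--capture-image-and-download",
            "--filename", filename, "--force-overwrite"]

def capture(filename: str = "capture.jpg") -> str:
    """Capture a photo on the connected DSLR and save it on the Pi."""
    subprocess.run(build_capture_command(filename), check=True)
    return filename
```

With the camera plugged in, `capture("shot.jpg")` triggers the shutter and leaves the image on the Pi's filesystem.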
Recently, while working on a project, I needed to install FFmpeg on a Raspberry Pi. I also required FFserver along with FFmpeg. There are two issues here: first, a standard binary of FFmpeg is not available for the Raspberry Pi, and second, FFserver is no longer packaged with FFmpeg; it was removed after version 3.4.
This post illustrates all the steps for installing FFmpeg and FFServer on Raspberry Pi. You can get a Raspberry Pi from Amazon if you don’t already have one.
First, we will install the prerequisites for the FFmpeg library. …
Disclaimer: This model should be used only for learning purposes as Covid-19 diagnosis is an ongoing research topic.
Firstly, you need to download the dataset from Kaggle. Check these steps for detailed instructions.
As a first step, download the dataset from Kaggle and create a new PyTorch dataset using the
Also, we are defining a transform to resize the images to 224x224 px and then convert each image to a tensor. …
In this post, we will see how to import data to the Neo4J database from CSV files.
Recently, I worked with graph databases for the first time and was really amazed by the capabilities they offer. Initially, I struggled to find a good resource that guides beginners through importing CSV data into the Neo4J database, so I thought it might be useful to share the steps using a real-world example.
Download and install Neo4J on your machine.
Next, start Neo4J and create a new “Local Graph” named “user_sample”.
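Once the graph is running, a CSV file placed in the database's import directory can be loaded with a LOAD CSV statement. Below is a sketch using the official Python driver (`pip install neo4j`); the `users.csv` file, its columns, and the connection credentials are all hypothetical:

```python
# Hypothetical CSV: users.csv with "id" and "name" columns, placed in the
# database's import directory so Neo4j can read it via file:///.
IMPORT_USERS = """
LOAD CSV WITH HEADERS FROM 'file:///users.csv' AS row
CREATE (:User {id: toInteger(row.id), name: row.name})
"""

def import_users(uri="bolt://localhost:7687", user="neo4j", password="password"):
    """Run the LOAD CSV statement against a local Neo4j instance."""
    from neo4j import GraphDatabase  # official Neo4j Python driver
    driver = GraphDatabase.driver(uri, auth=(user, password))
    with driver.session() as session:
        session.run(IMPORT_USERS)
    driver.close()
```

The same Cypher statement can also be pasted directly into the Neo4J Browser, without any Python at all.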
In this post, I will demonstrate how you can use custom building blocks in your deep learning model. Specifically, we will see how to use custom data generators, a custom Keras layer, a custom loss function, and a custom learning rate scheduler.
Keras ships with tf.keras.preprocessing.image.ImageDataGenerator (link), which is apt for most use cases, but in some cases you might want to use a custom data generator. You can implement the keras.utils.Sequence interface to define a custom generator for your problem statement. The __getitem__ function returns a batch of images and labels.
Once you define the generator, you can create instances of it for the training and validation sets. …
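A minimal sketch of such a generator, assuming in-memory NumPy arrays for the images and labels (real data would typically be read from disk inside __getitem__):

```python
import math
import numpy as np
from tensorflow.keras.utils import Sequence

class ImageBatchGenerator(Sequence):
    """Minimal custom generator yielding (images, labels) batches."""

    def __init__(self, images, labels, batch_size=32):
        self.images = images
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return math.ceil(len(self.images) / self.batch_size)

    def __getitem__(self, idx):
        # Return the idx-th batch of images and labels.
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.images[lo:hi], self.labels[lo:hi]

# Separate instances for the training and validation sets:
train_gen = ImageBatchGenerator(np.zeros((100, 224, 224, 3)), np.zeros(100))
val_gen = ImageBatchGenerator(np.zeros((20, 224, 224, 3)), np.zeros(20))
```

These instances can be passed straight to `model.fit(train_gen, validation_data=val_gen)`.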
Tensorflow 2.0 comes with a set of predefined, ready-to-use datasets. They are quite easy to use and often handy when you are just playing around with new models.
In this short post, I will show you how you can use a pre-defined Tensorflow Dataset.
Make sure that you have the required packages installed:
pip install -q tensorflow-datasets tensorflow
In this example, we will use a small dataset.
You can visit this link to get a complete list of available datasets.
We will use the tfds.builder function to load the dataset.
We will also set a flag to True so that we can perform some manipulations on the data. …
In this post, I will explain how to use the Android Paged List with a boundary callback. The idea is to see an example where both the network and the local database are involved. We will be using Kotlin for the whole example.
We will be using the Android Paged List to populate a RecyclerView with data from the DB. The DB is populated using network calls. Here’s the flow:
The RecyclerView will observe the local DB to display the items in the list.