
Using AI and ML with MicaSense multispectral data

Published by Josh Spires on 02 June, 2022, updated on 02 June, 2022.

AI and machine-learning algorithms have usually required data from a separate RGB camera, but with the improved spatial resolution of the RedEdge-P and Altum-PT, the outputs have a fine enough ground sample distance (GSD) to run ML algorithms on them directly. MicaSense put this to the test this fall in a pumpkin patch.

Earlier this fall, Justin McAllister, MicaSense CTO, loaded his kids into the car and drove them out to Swans Trail Farms, a combination pumpkin patch, apple orchard, and corn maze (among other things), for a fun-filled morning. While waiting in line for the corn maze, he happened to strike up a conversation with Nate Krause, the operations manager of Swans Trail. Inevitably, this led to a discussion of possible remote-sensing applications in pumpkin production. Asked whether an accurate assessment of the quantity and size of pumpkins in the field would be helpful, Nate gave a candid response: “That would be interesting to look at, but it wouldn’t be that useful to us; we plant the area and what happens to grow is what we have to offer,” he continued, “but some of the other bigger producers in Washington would probably find some value in that.” The conversation ended with Nate graciously inviting the MicaSense team to come out at a later date to fly the field and capture data that could test whether MicaSense multispectral cameras, specifically the RedEdge-P, could accurately quantify and categorise all of the pumpkins in the pumpkin patch.

Counting pumpkins, in particular, may be irrelevant for many of our end users, but the project’s general principles, and the machine-learning methods applied to get an accurate assessment of the pumpkin patch, can be used across a wide swath of remote-sensing applications. The spatial resolution of the new RedEdge-P and Altum-PT makes these sensors far more compatible with current AI/ML algorithms. In the past, AI/ML algorithms have required data from a separate RGB camera, but the RedEdge-P and Altum-PT outputs now have a fine enough GSD to run ML algorithms on directly, potentially taking the place of the RGB cameras that were previously required for AI/ML.

With that, we’ll continue on to the pumpkin patch.

A few weeks after being granted permission from Swans Trail Farms, Stephen and Gabe from Support, Cody from Sales, and Toph from the Measure team drove the 45 minutes out to Snohomish, WA, to fly the pumpkin patch. After removing some traffic cones that had been placed to keep the madding crowds from flooding the pumpkins, they cruised in, parked the Prius at a convenient spot near the patch, and began to unpack and prepare their equipment.

For the flight, they used the new RedEdge-P, a five-band multispectral plus panchromatic sensor from MicaSense, paired with the DJI Matrice 300 drone. To create the polygon-shaped mission over the 23-acre field, they used the Ground Control mission planner from Measure. Once the drone, camera, and mission were set up and ready to go, they hit the launch button and flew the site at 60 m AGL, with the goal of achieving 2 cm/pixel GSD in the pan-sharpened data.
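As a rough illustration of how flight altitude relates to that 2 cm/pixel target, here is a minimal GSD check in Python. The focal length and pixel pitch below are placeholders, not official RedEdge-P specifications; substitute the values from your camera's datasheet.

```python
# Rough ground-sample-distance (GSD) check for flight planning.
# Sensor values are placeholders, not official RedEdge-P specs.

def gsd_cm_per_px(altitude_m: float, focal_length_mm: float, pixel_pitch_um: float) -> float:
    """GSD (cm/pixel) = flight altitude x pixel pitch / focal length."""
    return (altitude_m * 100.0) * (pixel_pitch_um * 1e-4) / (focal_length_mm / 10.0)

altitude_m = 60.0          # AGL, as flown over the pumpkin patch
focal_length_mm = 16.0     # placeholder value
pixel_pitch_um = 4.5       # placeholder value

print(f"Approximate GSD: {gsd_cm_per_px(altitude_m, focal_length_mm, pixel_pitch_um):.1f} cm/px")
```

With these example numbers the estimate comes out to roughly 1.7 cm/pixel, in the same range as the 2 cm/pixel pan-sharpened target.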

After completing the flight and reviewing the CFexpress storage card to ensure the data was collected, they packed up their gear and headed back to the office in Seattle.

With the drone mission complete and the data collected, it was time to send everything over to Callum Scougal, MicaSense’s Sales Engineer, to process the imagery and get those pumpkins counted and sized.

For processing the data, Callum used Agisoft Metashape. Once the data had been processed and pan-sharpened, he extracted the five-layer multispectral GeoTIFF at 2 cm/pixel and brought it into QGIS to get an overview of the pumpkin patch. The pumpkins were then semi-automatically classified using simple multispectral thresholding techniques on a small subset of the orthomosaic.
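A minimal sketch of that thresholding step is shown below, assuming a five-band GeoTIFF export. The file name, band order, and threshold values are illustrative assumptions, not the values MicaSense used.

```python
# Simple multispectral thresholding on a 5-band orthomosaic (illustrative sketch).
import numpy as np
import rasterio

with rasterio.open("pumpkin_patch_ortho.tif") as src:      # hypothetical filename
    # Adjust the unpacking order to match your export's band layout.
    blue, green, red, red_edge, nir = src.read().astype("float32")

# Ripe pumpkins reflect strongly in red relative to green, so a band-ratio
# threshold can separate them from foliage and soil in this simple sketch.
red_green_ratio = red / (green + 1e-6)
pumpkin_mask = (red_green_ratio > 1.4) & (red > 0.15)       # illustrative thresholds

print(f"Candidate pumpkin pixels: {int(pumpkin_mask.sum())}")
```

In practice the thresholds would be tuned interactively on the small subset of the orthomosaic before the mask is vectorised into detection polygons.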

While this pixel-based approach was effective for general classification, it could not segment individual pumpkins growing in close proximity to one another, which led to inaccurate pumpkin counts across the patch. To solve this problem, Callum manually edited any clumped detections and built a deep-learning model in TensorFlow, using the already-generated semi-automated detections as the input training data. The model was fed 2 cm pan-sharpened RGB imagery from the RedEdge-P alongside the polygon detections. The high-resolution data was critical for the model to learn effectively and to distinguish between very close but separate pumpkins.
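For readers curious what such a model might look like, here is a minimal TensorFlow sketch of a binary segmentation network trained on RGB chips plus rasterised polygon masks. This is an illustrative stand-in under assumed chip sizes and data arrays, not MicaSense's actual architecture or training code.

```python
# Minimal per-pixel pumpkin segmentation sketch in TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_segmenter(chip_size: int = 128) -> tf.keras.Model:
    """Tiny encoder-decoder that predicts a per-pixel pumpkin probability."""
    inputs = layers.Input(shape=(chip_size, chip_size, 3))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)

model = build_segmenter()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# `rgb_chips` and `mask_chips` would be tiles cut from the 2 cm pan-sharpened
# orthomosaic and from the rasterised semi-automated detections, respectively.
# model.fit(rgb_chips, mask_chips, batch_size=16, epochs=20, validation_split=0.1)
```

The predicted probability maps would then be thresholded and vectorised back into individual pumpkin polygons, with neighbouring instances kept separate.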

In this particular example, we were interested in generating accurate pumpkin counts and estimating their size and surface area. This could help generate predictions of potential profits or losses for the crop, allow temporal comparisons across years, and track count and germination rates over time.
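Once detections exist as polygons, the counts and area statistics fall out directly. The sketch below assumes the detections were exported to a GeoPackage in a projected (metric) CRS; the file name and size-class bins are hypothetical.

```python
# Count and size detected pumpkins from exported polygons (illustrative sketch).
import geopandas as gpd
import pandas as pd

detections = gpd.read_file("pumpkin_detections.gpkg")   # hypothetical file

count = len(detections)
areas_m2 = detections.geometry.area                     # planar area; needs a projected CRS
print(f"Pumpkins detected: {count}")
print(f"Mean surface area: {areas_m2.mean():.3f} m^2")
print(f"Total covered area: {areas_m2.sum():.1f} m^2")

# Hypothetical size classes that could feed yield estimates or year-on-year comparisons.
size_class = pd.cut(areas_m2,
                    bins=[0, 0.05, 0.10, 0.20, float("inf")],
                    labels=["small", "medium", "large", "extra-large"])
print(size_class.value_counts())
```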

The techniques used here are versatile and easily transferred to other sectors. For example, the same analysis process could be applied to forestry stand counts, or to detecting early-emergent corn and other small plants of interest. We can detect features and output counts or statistics based on the reflectance of those features, which in turn allows us to monitor and manage farms, forests, and our environmental resources more efficiently and effectively.

Source: MicaSense

Interested? Talk to our team

For inquiries or more information, please fill out the form below, and our team will contact you as soon as we can.


Download brochure

Download technical specifications