
Building indoor 3D models

Published by Josh Spires on 23 April, 2020, updated on 23 April, 2021.

Part or all of this article originally appeared elsewhere.

I ) Capturing data with Elios 2

When approaching a discipline like photogrammetry, which is an art as much as a complex science, it helps enormously to skip past the initial difficulties and learn the tricks that pioneers discovered through a lot of trial and error.

The purpose of this blog post is to explain, step by step, how to use the Flyability Elios 2 to acquire appropriate data for indoor photogrammetry. If you are familiar with outdoor photogrammetry, you will still enjoy this blog post, as indoor photogrammetry differs slightly from outdoor photogrammetry for two simple reasons: the absence of a GPS signal and the absence, or limited amount, of light. We share field-proven advice to ensure that you produce the right image quality and follow the right flight trajectory to be successful with indoor photogrammetry.


A ) Image quality

Photogrammetry in GPS-denied environments relies solely on visual information. The image quality is therefore of paramount importance. Here is how to ensure a suitable image quality with Elios 2.

– Set the camera resolution to 4K. We recommend capturing data at the highest pixel resolution and downscaling the images if you need to shorten the processing time. As photogrammetry works on images, Inspector – the Elios 2 companion software – allows you to extract video frames and save them as images.
– Set the lighting to the maximum intensity. At take-off the lighting automatically increases to the maximum intensity; do not reduce it.
– Keep the view unobstructed. Do not pitch the camera below -30° to avoid obstruction from the cage.
– Ensure correct image exposure. In general, set the EV compensation to 0. In large assets, stay close to the walls and face towards them. If you turn around and face the void, the image is likely to be underexposed (dark).
– Avoid high ISO values to limit image noise. High ISO values may significantly harm the photogrammetry process. To reduce the ISO value, fly closer to the walls or objects. The camera will automatically decrease the ISO when the scene is better illuminated.
– Avoid long exposure times to limit motion blur. From our experience, exposure times longer than 1/60 s may be a problem. As with ISO, the exposure time is dictated by the amount of light received by the camera. To shorten the exposure time, fly closer to the objects or walls.
– Adapt your speed. Motion blur is proportional to optic flow – the speed at which objects move in the image. To reduce motion blur, fly slowly. Avoid rotations (yaw/heading and camera pitch) as much as possible, as they generate a high optic flow.
Note that the Elios 2 camera uses a fixed focal length, as required for photogrammetry.
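To get a feel for how speed, exposure time, and distance interact, here is a back-of-the-envelope sketch in Python. The focal length and pixel size are the Elios 2 camera characteristics listed later in this article; the flight speed and distance are example assumptions, and the simple pinhole geometry ignores lens distortion.

```python
# Rough motion-blur estimate for a camera moving parallel to a wall.
# Focal length and pixel size come from the Elios 2 camera characteristics
# listed later in this article; speed and distance are example assumptions.

FOCAL_LENGTH_MM = 2.71       # Elios 2 focal length
PIXEL_SIZE_MM = 0.00155      # 1.55 um pixel pitch

def gsd_mm_per_px(distance_to_wall_m: float) -> float:
    """Ground sampling distance: size of one pixel projected on the wall."""
    distance_mm = distance_to_wall_m * 1000.0
    return PIXEL_SIZE_MM * distance_mm / FOCAL_LENGTH_MM

def motion_blur_px(speed_m_s: float, exposure_s: float,
                   distance_to_wall_m: float) -> float:
    """Pixels traversed by a wall feature during one exposure."""
    travel_mm = speed_m_s * exposure_s * 1000.0
    return travel_mm / gsd_mm_per_px(distance_to_wall_m)

# Example: flying at 0.5 m/s, 1 m from the wall, with a 1/60 s exposure.
blur = motion_blur_px(0.5, 1 / 60, 1.0)
print(f"{blur:.1f} px of motion blur")
```

Even at a modest 0.5 m/s, the blur spans many pixels at this distance, which is why flying slowly and close to the wall matters so much.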

B ) Flight trajectory

The flight trajectory must be adapted to the asset geometry, the area to be covered, and the expected level of details. Flying a good trajectory requires a high level of piloting skills.

– Observe your environment. Fly a first battery to explore the asset and find landmarks that you can easily recognize. Determine the area that you want to map. Decide where to start and where to stop, what obstacles you may encounter, and what landmarks can help you find your way.
– Adopt a grid pattern, if possible. With this pattern, you will fly either horizontal or vertical lines. Read more about image acquisition plans on Pix4D’s support site. To choose between horizontal and vertical lines, observe the area to be mapped and try to find linear features that can guide you. These could be welds or joints, bolts that lie on a line, or any object that repeats and forms a line. Such features will help you stay aligned on your grid pattern.
– Choose the distance to the wall (or the floor, or the ceiling), considering two aspects:
* The desired level of detail, expressed by the ground sampling distance (GSD). The GSD is measured in millimeters per pixel (mm/px); the closer you fly, the lower the GSD and the more detail is reproduced in the 3D model.
* The distance between the linear features that can guide you. If you fly too close and the features are far apart, you may not see them anymore and you risk deviating from the grid pattern.
– Fly at a constant distance from the walls or objects. It is easier for the photogrammetry software to match images that have the same scale (where the same object appears with the same size). In general, matching will fail for images where the same object is observed at scales that differ by more than a factor of 2. See Pix4D’s article on flight height variations.
– Ensure overlap. Frontal overlap (in the direction of the drone motion) can be adjusted when you extract images from the video. Lateral overlap is determined by the distance to the wall and the distance between flight lines. We recommend a 50% lateral overlap.
– Ensure visual continuity. Image acquisition plans (such as the grid pattern) are nice concepts, but you will see their limitations very quickly if you fly in a complex asset. Because of the asset geometry and the unexpected conditions that you may face, you may have to improvise a “free flight plan”. Nevertheless, each image used for photogrammetry must overlap with at least 3–4 other images (the previous/next images). Keep in mind the following:
* Avoid quick rotations that would make you face a completely new scene within a second.
* Avoid passing through manholes or very close to objects that can suddenly hide or reveal a completely new scene.
– Close the loop. A loop is made when you revisit a place that you already visited earlier in the flight. If you shoot images that are very similar to the previously recorded images, the photogrammetry software can match them and correct for any drift that may have accumulated over time. For this, make sure that when you close the loop, the same objects are seen from the same angle and distance. Try to determine in advance (during the observation flight) how you will close the loop. It may be a particular landmark that is easily recognizable and central in the area that you want to map. Sometimes, loop closure is made easy by the shape of the asset. Also, you can often use the entry / exit point for loop closure.
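As a rough planning aid, the GSD and the lateral overlap between adjacent grid lines can be estimated from the distance to the wall and the line spacing. This sketch uses the Elios 2 sensor values listed later in this article; the distances are example assumptions, and it assumes a simple pinhole model flying horizontal lines (so the vertical sensor dimension governs the lateral overlap).

```python
# Lateral-overlap geometry for a grid pattern flown along horizontal lines.
# Sensor values come from the Elios 2 camera characteristics listed later
# in this article; the distances below are example assumptions.

FOCAL_LENGTH_MM = 2.71
ACTIVE_SENSOR_V_MM = 3.348   # vertical active sensor size in 4K format

def wall_footprint_m(distance_to_wall_m: float) -> float:
    """Vertical extent of the wall captured in one image."""
    return ACTIVE_SENSOR_V_MM * distance_to_wall_m / FOCAL_LENGTH_MM

def lateral_overlap(distance_to_wall_m: float, line_spacing_m: float) -> float:
    """Fraction of the footprint shared by two adjacent flight lines."""
    return 1.0 - line_spacing_m / wall_footprint_m(distance_to_wall_m)

# Example: flying 1 m from the wall, horizontal lines spaced 0.6 m apart.
print(f"{lateral_overlap(1.0, 0.6):.0%} lateral overlap")
```

In this example, 0.6 m line spacing at 1 m from the wall lands close to the recommended 50% lateral overlap; flying closer requires proportionally tighter line spacing.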

II ) Processing data with Elios 2

Now that you know why one might want to use photogrammetry to build 3D models of indoor spaces, and how to acquire appropriate data with Elios 2, you are ready to process these data into your first 3D model. In this blog post we will review how you can use Inspector to prepare your dataset for processing with Pix4Dmapper or another photogrammetry processing software. We will then go step by step through the process of building your first 3D model with Pix4Dmapper.

To follow this blog post you will need an Elios 2 dataset, Inspector, and Pix4Dmapper. If you don’t yet have the required hardware or software, here is how to download all the material:

– Download an Elios 2 dataset
– Download Inspector
– Get a trial of Pix4Dmapper

A ) Preparing the data with Inspector

The first step is to extract video frames and save them as images. Open your video in Inspector, and use the tool “Export frames as images” under the “Export” menu.

This tool lets you select the start and end points for the frame extraction, and the frequency at which frames are extracted. Since the video is recorded at 30 frames per second (fps), you will get one frame per second if you choose “one image every 30 frames”. This is often a good frequency to start with. If you were flying at high speed, or if the images contain few visual features, you may decide to increase the number of images by choosing “one image every 15 frames”, for example. But keep in mind that a higher number of images results in a longer processing time!
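A quick way to estimate how many images a given extraction setting produces, based on the 30 fps recording mentioned above (the flight duration is an example value):

```python
# Estimate the number of images a frame-extraction setting yields,
# assuming the 30 fps video recording described above. The flight
# duration below is an example assumption.

VIDEO_FPS = 30

def extracted_image_count(flight_duration_s: float, every_n_frames: int) -> int:
    """Number of frames saved when extracting one image every N frames."""
    total_frames = int(flight_duration_s * VIDEO_FPS)
    return total_frames // every_n_frames

# A 9-minute flight, extracting one image every 30 frames (1 image/s):
print(extracted_image_count(9 * 60, 30))   # 540 images
# The same flight at one image every 15 frames doubles the image count:
print(extracted_image_count(9 * 60, 15))   # 1080 images
```

Doubling the extraction rate doubles the image count, and with it the processing time, so increase it only when the dataset really needs the extra overlap.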

For Pix4D users, we provide two processing templates that will set all the recommended processing parameters for you. If you check the corresponding box, the template file will be saved next to your images.

  1. Fast processing: recommended for datasets with a lot of visual features and generally high overlap. The images are downscaled for faster processing. The resulting point cloud has fewer points.
  2. Robust processing: recommended for datasets with fewer visual features, or a sub-optimal trajectory (low overlap). The matching strategies are more robust, but the processing takes longer. The images are kept at full resolution for the point cloud densification, resulting in a denser point cloud.

B ) Processing with Pix4D


When you create a new Pix4D project and add the video frames extracted by Inspector, the software will recognize them (tags in the image exif) and automatically select the correct camera model: Elios2_2.7_3840x2160

If you extracted the images with a tool other than Inspector, they will not be recognized and you will have to select the camera model manually from a dropdown menu. Note that the Elios 2 camera model is only available for the 4K format. Pix4D will not show it if you are using images with a different format.


You can import a processing options template (.tmpl) following these instructions. To generate the template file from Inspector, simply select the corresponding checkbox in the Export frames window. After the import, the template is saved in Pix4D and you can reuse it for further projects.


You can process images from several flights together in order to create larger models and localize all data on the same model. Projects with 2,000 images (about 4 flights with 1 image per second) are handled very well by Pix4D. The processing time depends on your hardware and the processing options (template) that you choose.

Remember that photogrammetry in a GPS-denied environment relies solely on the visual information in the images. When processing several flights together, it is crucial that each flight contains images that are very similar to images from the other flights, so that the flights can be connected to each other. Read again how to acquire appropriate data with Elios 2 to build 3D models, in particular the last point on loop closure.

C ) Using another photogrammetry software

You can also use a different photogrammetry software, provided that it accepts images that are not geotagged (images with no localization information).

Note that you may have to select a different camera model, with different parameters. Many photogrammetry packages allow you to specify approximate values for the main camera parameters, which the software then optimizes during processing. If you follow such a procedure, make sure that you start with an easy dataset. It could be beneficial to record a video specifically for this purpose:

– Outdoor, with good lighting conditions
– Environment with many visual features
– Flying several lines with high overlap

Below are the main characteristics of the Elios 2 camera:

– Image resolution in 4K format: 3840 × 2160 pixels
– Focal length: 2.71 mm
– Sensor chip size: 7.564 mm (H) × 5.476 mm (V)
– Active sensor size in 4K format: 5.952 mm (H) × 3.348 mm (V)
– Pixel size: 1.55 μm (H) × 1.55 μm (V)
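From the focal length and active sensor size above, you can derive the approximate field of view, which is useful when planning line spacing and distances. This is a sketch assuming a simple pinhole model, which ignores lens distortion:

```python
# Approximate field of view derived from the active sensor size and
# focal length of the Elios 2 camera (pinhole model, no lens distortion).
import math

FOCAL_LENGTH_MM = 2.71
ACTIVE_SENSOR_H_MM = 5.952
ACTIVE_SENSOR_V_MM = 3.348

def fov_deg(sensor_dim_mm: float) -> float:
    """Angular field of view for one sensor dimension (pinhole model)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * FOCAL_LENGTH_MM)))

print(f"horizontal FOV: {fov_deg(ACTIVE_SENSOR_H_MM):.1f} deg")
print(f"vertical FOV:   {fov_deg(ACTIVE_SENSOR_V_MM):.1f} deg")
```

The short 2.71 mm focal length gives a wide field of view, which is part of why the overlap recommendations above are achievable even at close range.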

The software may allow you to fine-tune some processing parameters, such as the image resolution or the matching strategy. Note that these parameters may have an important impact on the processing time and the quality of the result. Please refer to the documentation of your software.


D ) Referencing the model and taking measurements

By default, the 3D model will not have the correct scale and orientation. Since the images are not geotagged, you must provide ground control points and/or scale and orientation constraints if you want to scale, orient, and reference the project correctly. Scaling the project is mandatory if you want to take measurements. Giving it the right orientation helps with visualisation, and referencing the model in a given coordinate system allows you to display it together with other models and geodata.

This section assumes that you are using Pix4D, but the steps explained here are also found in other photogrammetry software.

After the first processing step (camera positions and orientations), you already get a quality report that indicates the number of images that could be calibrated. It also gives other indications of the quality of the results.

By looking at the 3D view, you should recognize the shape of your asset. If only a low number of images are calibrated, or if the model is obviously distorted or inconsistent, you may change the processing options to more robust parameters and rerun the first processing step. You can also decide to extract more images from the video (for example, two frames per second – one every 15 frames) and start a new project.

At this stage you can add:

– Scale constraints. If you know the length of some objects that you can identify on images, you can use them to scale the project. Read this article to learn more about scale constraints in Pix4D.
– Orientation constraints. If the model is not upright, it may be difficult to visualize it. Adding a single orientation constraint to define the vertical axis is often useful. You simply need to identify, on the model or in the images, two points that should form a vertical line, such as the corners of a room, a vertical weld along a wall, a vertical pipe, the frame of a door, etc. This article explains how to set an orientation constraint in Pix4D.
– Ground control points (GCPs). To georeference your model in a given coordinate system, you need to add ground control points. These are points for which you know the precise 3D coordinates in your coordinate system. They are typically measured with a GPS or a total station. Learn more about GCPs in this excellent article by Pix4D.
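Conceptually, a single scale constraint boils down to one ratio applied to every coordinate. This is not Pix4D's actual implementation, just a minimal sketch of the underlying idea, with hypothetical example values:

```python
# Minimal sketch of how a single scale constraint rescales a model:
# the scale factor is the known real-world length divided by the length
# measured on the unscaled model, applied to every coordinate.
# All values below are hypothetical examples.

def scale_factor(known_length_m: float, measured_length_model_units: float) -> float:
    """Ratio converting arbitrary model units into meters."""
    return known_length_m / measured_length_model_units

def rescale_point(point, factor):
    """Apply the scale factor to one 3D point of the model."""
    return tuple(c * factor for c in point)

# Example: a door known to be 2.0 m tall measures 0.8 units in the model.
f = scale_factor(2.0, 0.8)   # 2.5
print(rescale_point((1.0, 2.0, 0.4), f))
```

In practice the photogrammetry software also re-optimizes camera positions when you add the constraint, which is why the reoptimization step below is required.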

Once you have added these elements, you need to reoptimize the project – a quick process that recomputes the positions of the images. Then you can continue with Step 2 (point cloud densification and 3D mesh).

Note that Step 3 (DSM, Orthomosaic and Index) is only needed if you need these specific outputs.

Interested? Talk to our team

For inquiries or more information, please fill out the form below, and our team will contact you as soon as we can.
