HyperFD and NASE Connect announced new fully autonomous inspection capabilities for the HFD01M3 dual-camera payload at DJI Airworks 2019. HyperFD has developed proprietary AI algorithms, embedded in the payload's on-board graphics processor, that enable real-time computer vision.
HyperFD CEO Shawn Sheng said that the initial algorithms will detect power distribution insulators and autonomously point, zoom, and capture images of the equipment. "We are thrilled to bring this capability to market. This is the culmination of months of engineering work and testing with our customers. We feel that this capability will have a great impact on the operations of utilities, bringing them large gains in efficiency," Mr. Sheng said.
Traditional drone inspections are done with a single colour camera. HyperFD disrupted this in 2017 by offering a dual colour-camera payload for increased efficiency. NASE Connect CEO Andy Buck says, "This added capability continues our push to bring advanced technology to practical applications. We are excited to bring this cutting-edge technology to the energy market."
The HFD01M3 is available today and is compatible with the M200, M210, and M210 RTK drones, in both V1 and V2 versions.
To learn more about how you can put this AI-enabled camera payload to work, contact NASE Connect today.
LiDAR has many applications in today's world, ranging from straightforward ground mapping for surveys to creating complex 3D models of infrastructure. Whatever the use case, there are practical challenges in capturing and processing LiDAR data, and in making the results genuinely useful.
The first limitation of many LiDAR systems is that, although they can capture extremely high-accuracy models, the output is just a cloud of uncoloured points. Humans perceive the world in colour; when colour is stripped from the data, it becomes much harder to interpret and make valuable use of it.
This limitation has led many companies to fly two missions back to back: one to capture the LiDAR data, and another to capture colour imagery for an orthomosaic. Unfortunately, this doubles the flying time and, worse, adds a significant amount of post-processing. Not only does the LiDAR data need to be processed, but so do the hundreds or thousands of photos. Once the photos have been stitched into an orthomosaic, that dataset must then be manually aligned to the LiDAR data. Getting this alignment right is difficult and error-prone, because the two datasets were captured from different positions in the air, and therefore from different viewing angles.
Enter Polynesian Exploration's PolyScanner, a truly unique product that eliminates the issues typically associated with capturing colourized LiDAR data. This integrated payload couples a Velodyne LiDAR with a high-resolution colour camera. Not only are both sensors integrated into one payload, they are also factory-calibrated to be aligned, and the provided software automatically fuses the two datasets. Beyond this, the payload includes an integrated RTK INS/GNSS for unmatched positional accuracy. With PolyScanner, a single flight yields a geo-referenced, colourized point cloud.
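Conceptually, the fusion step works by projecting each LiDAR point into the calibrated camera image and sampling the pixel colour at that location. Here is a minimal sketch of that idea, assuming a simple pinhole camera model with known intrinsics and a known LiDAR-to-camera transform; all function and parameter names are hypothetical and do not represent PolyScanner's actual software:

```python
import numpy as np

def colourize_points(points_lidar, image, K, R, t):
    """Assign RGB colours to LiDAR points by projecting them into a
    calibrated camera image (pinhole model, no lens distortion).

    points_lidar : (N, 3) points in the LiDAR frame
    image        : (H, W, 3) RGB image
    K            : (3, 3) camera intrinsic matrix
    R, t         : rotation (3, 3) and translation (3,) taking
                   LiDAR-frame points into the camera frame
    """
    # Transform points from the LiDAR frame into the camera frame.
    pts_cam = points_lidar @ R.T + t

    # Only points in front of the camera (positive depth) can be seen.
    in_front = pts_cam[:, 2] > 0

    # Pinhole projection: pixel = K @ (X, Y, Z), then divide by depth.
    proj = pts_cam @ K.T
    u = proj[:, 0] / proj[:, 2]
    v = proj[:, 1] / proj[:, 2]

    # Keep only projections that land inside the image bounds.
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample the image colour at each valid projected pixel.
    colours = np.zeros((points_lidar.shape[0], 3), dtype=image.dtype)
    colours[valid] = image[v[valid].astype(int), u[valid].astype(int)]
    return colours, valid
```

In a real pipeline, the factory calibration supplies K, R, and t, and a lens-distortion model would be applied before sampling; the point of the integrated, pre-calibrated payload is that none of this alignment has to be estimated manually after the flight.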
To learn more, please contact firstname.lastname@example.org