Wanted: 3D models based on satellite imagery

WASHINGTON, 15 March 2016. U.S. intelligence researchers are asking industry to develop core libraries of computer 3D models that represent manmade objects like buildings, roads, walls, bridges, towers, and dams to help with military mission planning based on satellite imagery.
Officials of the U.S. Intelligence Advanced Research Projects Agency (IARPA) in Washington will brief industry from 9 a.m. to 5 p.m. on 30 March 2016 concerning details of this program, called CORE3D. Briefings will be in the Washington, D.C. area.
U.S. intelligence experts need timely access to geospatially accurate 3D object data for global situational awareness as well as for military, intelligence, and humanitarian mission planning, IARPA officials say.
The CORE3D program has two aims: automated ways to create timely 3D models that capitalize on spectral, textural, and dimensional information from satellite data; and automated ways to recognize and understand objects in satellite reconnaissance data.
The manmade objects that the CORE3D program will model are invariant and relatively large, such as buildings, roads, walls, bridges, towers, dams, or other static structures.

The program will use simplified 3D representations such as constructive solid geometry (CSG), in which 3D shapes are built from Boolean operations on simple shape primitives such as cubes, cylinders, or spheres to fit and store the geometry of 3D models. IARPA experts will provide a core library of 3D shape primitives to all performers.
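As a rough illustration of the CSG idea, the sketch below composes a hypothetical building from box and cylinder primitives using Boolean union and difference, with each primitive represented as a signed distance function. None of this code, including the primitive set and the example shape, comes from the CORE3D solicitation; it only shows how Boolean operations on simple primitives can define a solid.

```python
# Conceptual sketch of constructive solid geometry (CSG) with signed
# distance functions. Illustrative only; the actual CORE3D primitive
# library and its representation are defined by IARPA, not shown here.
import numpy as np

def box(p, half_extents):
    """Signed distance from point p to an axis-aligned box at the origin."""
    q = np.abs(p) - np.asarray(half_extents)
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

def cylinder(p, radius, half_height):
    """Signed distance to a vertical cylinder centered at the origin."""
    radial = np.hypot(p[0], p[1]) - radius
    vertical = abs(p[2]) - half_height
    outside = np.linalg.norm(np.maximum([radial, vertical], 0.0))
    inside = min(max(radial, vertical), 0.0)
    return outside + inside

# Boolean operations on signed distances: a point is inside a shape when
# its distance is negative, so min/max implement union and intersection.
def union(d1, d2):        return min(d1, d2)
def intersection(d1, d2): return max(d1, d2)
def difference(d1, d2):   return max(d1, -d2)

def simple_building(p):
    """A hypothetical structure: a box with a cylindrical tower, minus a
    cylindrical courtyard -- composed purely from shape primitives."""
    base  = box(p, (20.0, 15.0, 10.0))
    tower = cylinder(p - np.array([15.0, 10.0, 0.0]), 3.0, 18.0)
    court = cylinder(p - np.array([-10.0, 0.0, 0.0]), 5.0, 12.0)
    return difference(union(base, tower), court)

# Query a point: a negative distance means the point lies inside the solid.
print(simple_building(np.array([0.0, 0.0, 0.0])))
```

Storing a model as a small tree of such Boolean operations and primitive parameters is what makes a CSG representation compact compared with storing raw surface meshes.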
The CORE3D program focuses on wide-area manmade object recognition and scene understanding. Proposed methods shall demonstrate that they can automatically recognize, tag, and update pre-defined object categories from satellite imagery.
Similar to the targets of the physical models, the object categories for functional modeling shall consist of static structures such as communication towers, airfields, power plants, water towers, lighthouses, schools, and hospitals.
IARPA researchers want industry to develop customized learning frameworks optimized for satellite imagery and multi-modal data fusion. Researchers are particularly interested in hybrid approaches that do not rely on just one computer vision or learning modality.
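A minimal sketch of what a hybrid, multi-modal approach might look like is shown below, assuming PyTorch: one branch extracts features from a multispectral image patch, another from simple height statistics derived from a point cloud, and the two are fused before classification. The architecture, band count, feature choices, and class count are all illustrative assumptions, not anything specified by IARPA.

```python
# Feature-level fusion sketch across two modalities (multispectral patch
# plus lidar-style height statistics). Hypothetical architecture; not the
# CORE3D learning framework.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        # Small CNN branch for an 8-band multispectral patch.
        self.image_branch = nn.Sequential(
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP branch for per-object height statistics from a point cloud.
        self.height_branch = nn.Sequential(nn.Linear(4, 16), nn.ReLU())
        # Concatenated features feed one shared classification head.
        self.head = nn.Linear(32 + 16, num_classes)

    def forward(self, image_patch, height_stats):
        fused = torch.cat(
            [self.image_branch(image_patch), self.height_branch(height_stats)],
            dim=1,
        )
        return self.head(fused)

model = FusionClassifier()
patch = torch.randn(2, 8, 64, 64)   # batch of multispectral patches
stats = torch.randn(2, 4)           # e.g. min/max/mean/std height per object
print(model(patch, stats).shape)    # torch.Size([2, 7])
```

The point of such a design is that no single modality has to carry the classification alone; either branch could be swapped for a traditional, hand-engineered feature extractor without changing the fusion step.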

Researchers are interested in satellite panchromatic and multispectral imagery, point clouds, and maps; multi-level fusion that includes data-level, feature-level, and decision-level fusion; object-level segmentation and classification; point cloud generation from multi-view satellite images; representations of complex scene geometry; accurate 3D model fitting and statistical inferencing; a deep learning framework optimized for satellite imagery scene recognition; and a hybrid image-understanding module using deep learning, traditional, and new image-understanding algorithms.
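For decision-level fusion in particular, one simple baseline is a confidence-weighted average of per-modality class probabilities. The sketch below shows that idea with made-up class names, probabilities, and weights that are not drawn from the CORE3D program.

```python
# Decision-level fusion sketch: combine per-modality class probabilities
# with confidence weights. Purely illustrative numbers and labels.
import numpy as np

CLASSES = ["building", "road", "bridge", "tower"]

def fuse_decisions(prob_by_modality, weights):
    """Weighted average of per-modality class probability vectors."""
    probs = np.array([prob_by_modality[m] for m in weights])
    w = np.array([weights[m] for m in weights], dtype=float)
    w /= w.sum()                           # normalize the confidence weights
    fused = (w[:, None] * probs).sum(axis=0)
    return CLASSES[int(fused.argmax())], fused

predictions = {
    "multispectral": [0.55, 0.20, 0.05, 0.20],  # imagery leans "building"
    "point_cloud":   [0.30, 0.05, 0.05, 0.60],  # tall, narrow return leans "tower"
}
label, fused = fuse_decisions(predictions, {"multispectral": 0.4, "point_cloud": 0.6})
print(label, fused)
```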