3D Reconstruction and Graphics
Academic contact: Prof Ebroul Izquierdo
3D Reconstruction and Recognition of Rigid Objects of Complex Installation Engines
Academic contacts: Prof Ebroul Izquierdo
Figure 2: Actual installation
Figure 1: Example of 3D model
In order to find faults in aircraft installation engines, human intervention is currently needed in the verification phase that supports validation and optimization of aircraft installations. There is therefore a strong incentive to automate the verification process to support aircraft-level modelling and analysis. This can be achieved by building accurate 3D models, with semantic metadata, of the installation engines and comparing them against the baseline CATIA/CAD installations. The project is challenging: the literature states that only 85% of the objects in industrial installations can be approximated by Constructive Solid Geometry primitives such as planes, spheres, cones, cylinders and toroidal surfaces. The goal is to deploy an independent (vision-based) method to accelerate convergence toward an optimal system architecture that integrates safety constraints.
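Fitting Constructive Solid Geometry primitives to measured points is typically done with robust estimators such as RANSAC. As a minimal sketch (function name, thresholds and synthetic data are illustrative, not from the project), the following fits the simplest CSG primitive, a plane, to a noisy point cloud:

```python
# Hedged sketch: RANSAC plane fitting, the kind of primitive extraction
# used to approximate industrial installations with CSG shapes.
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.02, seed=None):
    """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane through the three sampled points.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        inliers = (np.abs(points @ n + d) < threshold).sum()
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

# Synthetic test: points near the plane z = 0 with small Gaussian noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                       rng.normal(0, 0.005, 500)])
normal, d = fit_plane_ransac(pts, seed=1)
```

The same sample-score-refine loop generalizes to spheres, cylinders and cones by swapping the minimal solver and the point-to-surface distance.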
Figure 4: CAD model installation (2)
Figure 3: CAD model installation (1)
The overall framework is shown. The first phase of the research is to generate a 3D model (or a set of parameters) from multiview stereo images; this includes critical steps such as multi-view sensing, calibration, stitching, and 3D model reconstruction and/or synthesis.
Figure 5: Constructive Solid Geometry
As part of multiview sensing, we captured two datasets of around 400-500 high-resolution pictures each, covering a 360-degree view, using the MCSO and SCMO approaches. In the first (MCSO) the camera rotates around the object; in the second (SCMO, more accurate) the object rotates while the camera remains stationary. Disparity analysis finds the offset between corresponding points in the two images and is the key part of stereopsis. The 3D point cloud obtained using the Bundler algorithm is shown.
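The disparity analysis mentioned above can be sketched with simple block matching: for each pixel in the left image, search along the same row of the right image for the window with the lowest sum of absolute differences. This is an illustrative toy (real pipelines use calibrated, rectified images and more robust costs), not the project's implementation:

```python
# Hedged sketch: per-pixel disparity by SAD block matching on rectified images.
import numpy as np

def disparity_sad(left, right, max_disp=4, win=3):
    """Return an integer disparity map: for each left pixel, the horizontal
    shift d whose right-image window best matches the left-image window."""
    h, w = left.shape
    pad = win // 2
    lp = np.pad(left, pad, mode='edge')
    rp = np.pad(right, pad, mode='edge')
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = lp[y:y + win, x:x + win]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = rp[y:y + win, x - d:x - d + win]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic stereo pair: the right view is the left view shifted by 2 pixels,
# so the true disparity is 2 everywhere (away from the wrap-around border).
rng = np.random.default_rng(0)
left = rng.random((20, 20))
right = np.roll(left, -2, axis=1)
disp = disparity_sad(left, right)
```

With known camera geometry, disparity converts directly to depth, which is what links this step to the 3D point cloud.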
Figure 6: Conceptual Framework Diagram
In this project, we intend to use a scalable triangular mesh technique, i.e. local retriangulation, to represent 3D models; it derives scalable meshes with specific properties from the complete mesh.
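To make the idea of retriangulation concrete, here is a minimal sketch of one refinement level: each triangle is split into four by inserting edge midpoints, with shared edges reusing the same new vertex so the refined mesh stays watertight. This is an illustrative 1-to-4 subdivision, not the project's local retriangulation algorithm:

```python
# Hedged sketch: one level of midpoint (1-to-4) mesh retriangulation.
import numpy as np

def subdivide(vertices, faces):
    """Split every triangle into four; midpoints on shared edges are
    created once and reused, keeping the mesh consistent."""
    vertices = list(map(tuple, vertices))
    midpoint_cache = {}   # edge (i, j) -> index of its midpoint vertex
    new_faces = []

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = tuple((np.array(vertices[i]) + np.array(vertices[j])) / 2)
            midpoint_cache[key] = len(vertices)
            vertices.append(m)
        return midpoint_cache[key]

    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(vertices), new_faces

# Two triangles sharing an edge: 4 vertices, 2 faces.
V = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
F = [(0, 1, 2), (0, 2, 3)]
V2, F2 = subdivide(V, F)   # 5 unique edges -> 9 vertices, 8 faces
```

Applying the split only to selected triangles (rather than all of them) is what makes the refinement local and the mesh scalable.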
The second phase of the research is to perform surface matching and analysis between the 3D model obtained with the computer vision methodology and the geometric installation model. Descriptions of volumes (3D objects) and of 3D spatial relationships are also to be provided, for cross-validation against similar descriptions derived from the CATIA representations in other tasks.
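A basic building block for such surface matching is measuring, for every point of the reconstructed model, the distance to the nearest point sampled from the reference CAD surface. A minimal brute-force sketch (at realistic scales a k-d tree would replace the pairwise distance matrix; the data here are synthetic stand-ins):

```python
# Hedged sketch: per-point deviation of a reconstructed cloud from a
# reference (CAD-sampled) cloud, the raw material of surface comparison.
import numpy as np

def cloud_to_cloud_distance(recon, reference):
    """Distance from each reconstructed point to its nearest reference point."""
    # (N, M, 3) differences -> (N, M) Euclidean distances -> row-wise minimum.
    diff = recon[:, None, :] - reference[None, :, :]
    return np.linalg.norm(diff, axis=2).min(axis=1)

rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, (200, 3))             # stand-in for CAD samples
recon = reference + rng.normal(0, 0.01, (200, 3))   # reconstruction + small error
deviation = cloud_to_cloud_distance(recon, reference)
```

Thresholding or mapping these deviations onto the mesh is then what flags regions where the built installation departs from the CAD model.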
Figure 7: Multiview Images obtained through SCMO (Stationary Camera Moving Object) approach
Visual Word-Based CAPTCHA using 3D Characters
Academic contacts: Prof Ebroul Izquierdo
Figure 11: 3D models created for the approach
Internet security has been an important issue since the Internet was opened up to the general public in the mid-1990s. The fact that everyone has easy and quick access to the network has made this problem grow rapidly. One of the most effective methods to increase security is the use of CAPTCHAs. The primary application of a CAPTCHA is to prevent malicious attacks on systems by spammers. CAPTCHAs also protect vulnerable systems, such as Yahoo or Hotmail, against e-mail spam and against automated posting to forums, blogs and wikis driven by commercial interests or harassment. A word-based CAPTCHA test consists of an image containing distorted and noisy characters or words. To solve the test, the user has to type the characters presented in the image. Usually, the distortions applied to the image are complicated enough to prevent a robot from recognizing the word while still allowing humans to do so.
Figure 12: Block diagram of the proposed algorithm
Our proposed algorithm randomly selects a small number of 3D characters from a database of 3D objects. An important property of these characters is that they are delimited by shadows. The algorithm then distorts each character and randomly changes its illumination sources and directions in 3D space. Next, all the selected characters are arranged in a sequence. Finally, a 2D image of the character sequence is rendered onto a background selected from another database.
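The per-character randomization step can be sketched as applying an independent random 3D rotation to each character's vertices before projecting and laying them out in a row. All names and parameters here are illustrative stand-ins (the real system works on 3D letter meshes with lighting and a rendered background):

```python
# Hedged sketch: independent random 3D pose per character, then a simple
# orthographic projection and left-to-right layout.
import numpy as np

def random_rotation(rng):
    """Random rotation matrix from three small Euler angles (illustrative)."""
    ax, ay, az = rng.uniform(-0.4, 0.4, 3)
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def place_characters(char_models, rng):
    """Distort each selected character independently, then arrange the
    sequence left to right with a gap between characters."""
    placed, offset = [], 0.0
    for verts in char_models:
        v = verts @ random_rotation(rng).T        # random 3D pose
        v = v[:, :2] + np.array([offset, 0.0])    # drop z: orthographic projection
        offset = v[:, 0].max() + 0.8              # gap before the next character
        placed.append(v)
    return placed

rng = np.random.default_rng(42)
# Stand-in "characters": unit squares (real models are 3D letter meshes).
chars = [np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
         for _ in range(4)]
layout = place_characters(chars, rng)
```

Because each character draws its own pose (and, in the full system, its own lighting), an attacker cannot undo the distortion with a single global transform.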
The results are validated using the Scale-Invariant Feature Transform (SIFT), a computer vision technique that extracts distinctive invariant features from images.
Figure 14: Example of an incorrect match with SIFT
Figure 13: Example of a clear match with SIFT
The features are used to perform matching between different views of an object or a scene. The algorithm is also robust at identifying cluttered and occluded objects. Recognition works by matching image features against a database of features extracted from known objects.