Saturday, April 13, 2019

Interoperable autonomous mining

I was speaking to someone recently who asked me why I stopped blogging - and, timely enough, the Future of Mining conference recorded a recent short speech of mine and published it on YouTube. The video, describing the performance of Epiroc's autonomous Pit Viper blasthole drills and the future of autonomous interoperability, is below. I'm not used to summarising so much in 10 minutes, but I briefly discuss the business impact of autonomous blasthole drills: increased production, increased quality, and more consistent performance. I make the argument that autonomy is a critical first step prior to realising an integrated operation: essentially you cannot achieve consistent real-time control of your fleet if you're reliant on the variability of manned operators, or without the information (rich data plus semantics) produced by autonomous equipment. To achieve the objective of a zero-entry (no people) integrated mine you will need to automate all the mining equipment, and waiting for a single supplier to achieve this means delayed realisation of the business case. Collaboration between multiple suppliers enables best-of-breed solutions and the flexibility to ensure we see a fully autonomous and integrated mine sooner.

I'll start blogging again now - covering some of my backlog from the last few years on autonomous mining, but also back to some technical topics as well. Stay tuned.

Sunday, November 04, 2012

Back-substitution and inverting matrices

Matrix triangulation and back-substitution algorithms can be used in combination with Gaussian elimination to solve systems of equations or to find the inverse of a matrix. I previously covered Gaussian elimination; continuing on, we can now solve the system of equations using back-substitution.

The matrix we had to solve was:
    1     2     1     4    13 
    0    -4     2    -5     2 
    0     0    -5  -7.5   -35 
    0     0     0    -9   -18 
First we normalise the upper-triangular matrix by dividing each row such that its leading coefficient is one:
    1     2     1     4    13 
    0     1  -0.5  1.25  -0.5 
    0     0     1   1.5     7 
    0     0     0     1     2 
(this simplifies the back-substitution, though we could also skip it and fold the division into the back-substitution step)
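As a minimal sketch, the normalisation is just a division of each row by its diagonal element (using the same 'mat', n rows and m columns conventions as the code further below):

for (int p = 0; p < n; p++) { //for each row
    float divisor = mat[p][p]; //the leading (diagonal) coefficient
    for (int c = 0; c < m; c++) { //entries left of the diagonal are already zero
        mat[p][c] = mat[p][c] / divisor;
    }
}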

For back-substitution we work our way backwards from the bottom of the matrix to the top, progressively eliminating each variable. As with Gaussian elimination we select a pivot row, but now subtract multiples of it from the rows above it. First, we start with the last row, and subtract 1.5 times that row from the row above.
    1     2     1     4    13 
    0     1  -0.5  1.25  -0.5 
    0     0     1     0     4 <-- subtract pivot row * 1.5
    0     0     0     1     2 <-- pivot 
Similarly, we continue on to the second row, subtracting 1.25 times, and the top row, subtracting four times.
    1     2     1     0     5 <-- subtract pivot row * 4
    0     1  -0.5     0    -3 <-- subtract pivot row * 1.25
    0     0     1     0     4 
    0     0     0     1     2 <-- pivot
Again, we repeat the process for the third column:
    1     2     0     0     1 
    0     1     0     0    -1 
    0     0     1     0     4 <-- pivot
    0     0     0     1     2 
And finally, the second column:
    1     0     0     0     3 
    0     1     0     0    -1 <-- pivot
    0     0     1     0     4 
    0     0     0     1     2 
Now we have our solution to the system of equations from our original Gaussian elimination problem.
a = 3, b = -1, c = 4 and d = 2.
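As a quick check, substituting back into the first equation gives a + 2b + c + 4d = 3 - 2 + 4 + 8 = 13, matching the original right-hand side.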
In words/pseudo-code, the process is:
  • Pivot through all the rows, starting from the bottom to the top
  • For each row above the pivot, calculate how many times we need to subtract the pivot row from this row.
  • For each element in the row, subtract the corresponding element of the pivot row, multiplied by the multiple calculated above.
In code:
for (int p = n - 1; p > 0; p--) { //pivot backwards through all the rows
    for (int r = p - 1; r >= 0; r--) { //for each row above the pivot
        float multiple = mat[r][p] / mat[p][p]; //how many multiples of the pivot row do we need (to subtract)?
        for (int c = p; c < m; c++) { //columns left of the pivot are already zero
            mat[r][c] = mat[r][c] - mat[p][c] * multiple; //subtract the pivot row element (multiple times)
        }
    }
}
(complete code here)
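Once the matrix is normalised and back-substituted, each row holds exactly one variable, so reading off the solution is just a matter of printing the last (augmented) column. A small sketch, assuming the same variables as above and that stdio.h is included:

//the solution is the last column: one variable per row
for (int r = 0; r < n; r++) {
    printf("x%d = %g\n", r, mat[r][m-1]); //prints 3, -1, 4 and 2 for our example
}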

This process can also be applied to find the inverse of a general square matrix. Beginning with the matrix we want to invert, we augment it with the identity matrix. For example:
    2     4    -2     1     0     0 
    4     9    -3     0     1     0 
   -2    -3     7     0     0     1 
Now we can apply Gaussian elimination to generate:
    2     4    -2     1     0     0 
    0     1     1    -2     1     0 
    0     0     4     3    -1     1 
Then normalise the upper triangle to get:
    1     2    -1   0.5     0     0 
    0     1     1    -2     1     0 
    0     0     1  0.75 -0.25  0.25 
And finally, back-substitution to get our solved inverse:
    1     0     0  6.75 -2.75  0.75 
    0     1     0 -2.75  1.25 -0.25 
    0     0     1  0.75 -0.25  0.25 
In this entire discussion I have left out ill-conditioned and singular matrices, but I'll leave modifying the code for that as an exercise for the reader.
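To make the whole pipeline concrete, here is a minimal self-contained sketch of the inversion above: augment with the identity, eliminate, normalise, back-substitute, and read the inverse out of the right half. As with the rest of this post, it assumes a well-conditioned, non-singular matrix (no pivoting or error checks):

#include <stdio.h>

#define N 3     //rows
#define M (2*N) //columns: the matrix augmented with the identity

int main(void) {
    //the example matrix, augmented with the identity
    float mat[N][M] = {
        { 2,  4, -2, 1, 0, 0},
        { 4,  9, -3, 0, 1, 0},
        {-2, -3,  7, 0, 0, 1}
    };
    //gaussian elimination
    for (int p = 0; p < N - 1; p++)
        for (int r = p + 1; r < N; r++) {
            float multiple = mat[r][p] / mat[p][p];
            for (int c = 0; c < M; c++)
                mat[r][c] -= mat[p][c] * multiple;
        }
    //normalise so each leading coefficient is one
    for (int p = 0; p < N; p++) {
        float divisor = mat[p][p];
        for (int c = 0; c < M; c++)
            mat[p][c] /= divisor;
    }
    //back-substitution (the pivot rows are already normalised)
    for (int p = N - 1; p > 0; p--)
        for (int r = p - 1; r >= 0; r--) {
            float multiple = mat[r][p];
            for (int c = 0; c < M; c++)
                mat[r][c] -= mat[p][c] * multiple;
        }
    //the right half of the augmented matrix now holds the inverse
    for (int r = 0; r < N; r++) {
        for (int c = N; c < M; c++)
            printf("%6.2f ", mat[r][c]);
        printf("\n");
    }
    return 0;
}

This prints the same inverse as derived by hand above (6.75, -2.75, 0.75, and so on).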

Gaussian Elimination

Gaussian Elimination is a sequence of elementary row operations that converts a matrix into upper-triangular (row echelon) form. It forms the basis of a number of operations in linear algebra: solving systems of equations, inverting matrices, and least-squares minimisation, among other things (I'll cover these in later posts). The Gaussian Elimination algorithm itself is straightforward (you probably learnt it in high school). Given a system of equations, e.g.
  a + 2b +  c + 4d = 13
 2a +      4c + 3d = 28
 4a + 2b + 2c +  d = 20
-3a +  b + 3c + 2d = 6
We can form an augmented matrix to represent it, and use Gaussian elimination to solve it. The goal is to produce a triangular matrix, so that we can then solve the equations by back-substitution. In other words, we (eventually) want each row to represent a single variable, with that variable eliminated (zero) in every other row - i.e. solved. Gaussian elimination takes us part of the way there, leaving a triangular set of equations we can then solve.

Representing the above equations as a matrix, we have:
    1     2     1     4    13 
    2     0     4     3    28 
    4     2     2     1    20 
   -3     1     3     2     6 
The first step is to select a pivot row, which we can use to eliminate/reduce the other rows. When we eliminate a row, we want that variable's value to become zero. In this example, we pick the first row, and then subtract twice that row from the row below it, to ensure that the row below will have zero a's.
    1     2     1     4    13 <-- pivot
    0    -4     2    -5     2 <-- subtract pivot row * 2
    4     2     2     1    20 
   -3     1     3     2     6 
Likewise, we subtract four times the pivot row from the third row, and negative three times from the final row.
    1     2     1     4    13 <-- pivot
    0    -4     2    -5     2 
    0    -6    -2   -15   -32 <-- subtract pivot row * 4
    0     7     6    14    45 <-- subtract pivot row * -3
Great. Our first variable (a) has been eliminated. We now repeat this step, starting from the second row, with the variable 'b'. We don't use the first row as the pivot this time, since subtracting it would reintroduce non-zero 'a' values into the rows below.
    1     2     1     4    13 
    0    -4     2    -5     2 <-- pivot
    0     0    -5  -7.5   -35 <-- subtract pivot row * 1.5
    0     0   9.5  5.25  48.5 <-- subtract pivot row * -1.75
Now, we repeat the process again, starting from the third row.
    1     2     1     4    13 
    0    -4     2    -5     2 
    0     0    -5  -7.5   -35 <-- pivot
    0     0     0    -9   -18 <-- subtract pivot row * -1.9
Done. In pseudo-code/words, the algorithm is:
  • For each row (except the last), select a pivot. (In my example, I just take the first available row each time)
  • For each row below the pivot, calculate the number of times we need to subtract the pivot row (i.e. divide that row's pivot-column entry by the pivot).
  • For each element in this row, subtract the corresponding element in the pivot row, multiplied by the value we calculated above.
The code to achieve this is:
//input a m (col) by n (row) matrix ('mat')
//p is the pivot - which row we will use to eliminate
for (int p = 0; p < n - 1; p++) { //pivot through all the rows
    for (int r = p + 1; r < n; r++) { //for each row below the pivot
        float multiple = mat[r][p] / mat[p][p]; //how many multiples of the pivot row do we need (to eliminate this row)?
        for (int c = 0; c < m; c++) { //for each element in this row
            mat[r][c] = mat[r][c] - mat[p][c] * multiple; //subtract the pivot row element (multiple times)
        }
    }
}
(full code here) Next time, we continue on to solve the equations - available here! The solution works out to a = 3, b = -1, c = 4, d = 2.
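For completeness, here's a minimal self-contained driver around the elimination loop above; the matrix initialiser and printing code are my own sketch rather than part of the linked code:

#include <stdio.h>

int main(void) {
    //the augmented matrix from above: 4 rows, 5 columns (4 coefficients + the constant)
    float mat[4][5] = {
        { 1,  2,  1,  4, 13},
        { 2,  0,  4,  3, 28},
        { 4,  2,  2,  1, 20},
        {-3,  1,  3,  2,  6}
    };
    int n = 4, m = 5;
    //gaussian elimination, exactly as in the loop above
    for (int p = 0; p < n - 1; p++) {
        for (int r = p + 1; r < n; r++) {
            float multiple = mat[r][p] / mat[p][p];
            for (int c = 0; c < m; c++) {
                mat[r][c] = mat[r][c] - mat[p][c] * multiple;
            }
        }
    }
    //print the resulting upper-triangular matrix
    for (int r = 0; r < n; r++) {
        for (int c = 0; c < m; c++) {
            printf("%6g", mat[r][c]);
        }
        printf("\n");
    }
    return 0;
}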

Sunday, September 30, 2012

Mining Robotics: An overview survey

Robotics is typically associated with manufacturing (e.g. the PUMA arm), military robotics (e.g. the Predator UAV), and more recently consumer robots (e.g. the Roomba), medical/healthcare (e.g. Da Vinci), and the automotive industry (e.g. driverless cars). Not many people are aware of the prevalence of robotics in the mining industry, or of the steps the industry has taken towards automation and autonomous robots.
Autonomous mining

The mining industry is a world leader in autonomy. For example, Rio Tinto's Western Australia operations have the world's largest fleet of autonomous vehicles (150 autonomous trucks) - significantly larger than any operational system in the military. Rio's Western Australia operations are all controlled from an operations centre thousands of kilometres away, which coordinates 40 mines, 30 pits, trains, power stations, and ports. In terms of data, the WA system generates around 2.4 terabytes of data per minute. There is quite a lot of intelligence and innovation involved.

Overall, mining can be broadly broken up into a few key phases:
Mining process
  • Exploration, assessment and planning, where new resources are identified and a new mine site is designed and constructed.
  • Drill and blast, where material is broken up so it can be extracted.
  • Load and haul, where material is taken from the point of extraction to the processing plant.
  • Processing, where the material is converted/crushed into a more useful (sellable) form.
  • Transportation, where the product is loaded and transported, usually via rail to a port and then by ship to its final destination.
  • Stockpiling, which occurs at various points and in varying quantities where appropriate.
At each of these steps some kind of machine is involved, and I'll give you a brief overview of the machines and some of the relevant research or commercial automation systems available.

Exploration and remote sensing is a massive research area in itself in other industries, and mining is no different. UAVs are seeing use in aerial surveys on mine sites, with large data sets fusing visual (photogrammetry), infrared, LIDAR, InSAR, gradiometry, seismic and other geodetic measurements.
West Angelas mine LIDAR scan
On the ground, new sensor fusion systems are being developed to classify the mine and ore structure and to identify the richest ore deposits. Combining all this data into an overall mine model is a difficult machine learning task. The Rio Tinto Centre for Mine Automation is doing active research in this field, and the Gatewing X100 is an example of a UAV used for mapping in mining.


Drill and blasting is a mining-specific operation, and there has been significant advancement in robotics in this area due to the operational hazards involved with this line of work. Robots can accurately drill holes that won't collapse and are easy to load, and Atlas Copco and Flanders both have commercial automation systems for drilling which are well on their way to delivering autonomous drill rigs in the near future (trial drilling systems have been in use on production sites since 2008). Atlas Copco first started work on automated drill rigs in the 1980s and now has over 2,500 machines running their control system technology.

The load and haul stage is perhaps the most interesting, as it is the first area where autonomous vehicles are used in regular production environments. Whilst autonomous loading is still an area of research (see these CSIRO projects on dragline and shovel loading automation), there are plenty of commercial automation systems for haul vehicles.
These include CAT MineStar Command, Atlas Copco Scooptram, Sandvik AutoMine, and Komatsu FrontRunner. Mineware provide shovel and dragline automation systems, with LIDAR systems that build digital terrain maps on the fly. Autonomous Solutions has a number of autonomous vehicles, including trucks and dozers.
Continuous miners and longwall mining have seen multiple automation systems, including commercial systems from Eickhoff and CAT. Excavators are no stranger to automation either: CMU automated excavators and truck dumping back in the late '90s, and work is ongoing at PWRI in Japan and at Hyundai research. The range of commercially available autonomous mining vehicles puts military UGVs and automotive companies to shame.


Transmin Rocklogic
Processing plants have been fully automated, although for many metals (e.g. iron ore) there isn't much to the process in the first place. Companies such as Metso have fully automated crushers and conveyors, and also include computer vision systems to identify and classify rocks/froth/bubbles, etc. FLSmidth and Calibre Transmin have developed automation systems for rock breakers, allowing the rockbreaker to automatically park and deploy. In-Pit Crushing and Conveying (IPCC) systems allow parts of the plant to be mobile, and even these systems have been largely automated by companies such as Sandvik.

Autonomous train
Transporting material from the mine is usually performed by train, and autonomous trains have been around for a while - in fact, LKAB have been running driverless trains since the 1970s. The main difference in modern mining applications is that the goal is now fully autonomous operation, and that the trains can stretch many kilometres in length, making control a more difficult problem. Major miners such as Rio Tinto are automating the trains in Western Australia, with companies such as Ansaldo STS and New York Air Brake providing the technology.

Finally, for stockpiling, stacker-reclaimers have been automated, with companies such as ThyssenKrupp and iSAM leading the way.


Rio Tinto - Remote Operation Center
Overall, there is a large amount of automated and autonomous mining equipment available, and projects such as Rio Tinto's Mine of the Future at the West Angelas and Yandicoogina sites, Vale's Carajas Serra Sul S11D site, and Nautilus's Solwara underwater mining are all pushing towards fully autonomous operations, where we may see no humans involved in running future mine sites.


So if you want to find out more about robotics and automation research in mining, there are a few great places to start.
The future of mining is autonomous robots, and we are well on our way!

Sunday, August 05, 2012

Programming links

Well overdue for a catchup post on the non-graphics programming side of things, so here we go:

Tuesday, July 31, 2012

Mid year point

Well, it's a bit over the mid-year point, and blog posts have been fewer than usual. There are a number of things that have happened in the last six months which I will do longer posts on:
  • The WAMbot Journal of Field Robotics article was accepted and published, which has been the subject of a number of previous posts on MAGIC2010. I've put together a few posts on the navigation system (A*, Elastic bands, DWA), but still nothing on the system architecture, hardware, exploration, AI, HMI, comms, SLAM, and overall experiences. So plenty of material left to go.
  • A paper on the Navigation system has been accepted for publication.
  • A paper on using Physics Abstraction Layer for evolving robot control programs has been accepted for publication.
  • I finally uploaded the code to ImprovCV and SubSim.
  • I gave a guest lecture on real-time raytracing with WebGL, and another on intelligent systems and automation in mining.
  • I've been doing a little bit of HTML5 and three.js work, which hopefully will turn into a few posts.
  • As per usual, I've been collecting a large list of interesting links from around the web, that will form a number of catchup posts.
Hopefully more from me soon...

Thursday, June 07, 2012

GPU Technology Conference 2012

nVidia's GPU Technology Conference is over, and a number of presentation slides have been uploaded. There were quite a few interesting talks relating to graphics, robotics and simulation:
  • Simon Green from nVidia and Christopher Horvath from Pixar presented 'Flame On: Real-Time Fire Simulation for Video Games'. It starts with a recent history of research on CG fluid systems, and gives five tips on better-looking fire: 1. Get the colors right (e.g. a radiation model); 2. Use high-quality advection (not just bilinear filtering); 3. Post-process with glow and motion blur; 4. Add noise; 5. Add light scattering and embers. They then go into more detail on Tip #1, looking at the physics behind the black-body radiation in a fire, and the color spectrum.
  • Elmar Westphal of PGI/JCNS-TA Scientific IT-Systems presented 'Multiparticle Collision Dynamics on one or more GPUs', about multiparticle collision dynamics GPU code. He starts by explaining the overall algorithm, and explaining step-by-step what performs well on the GPU. Specific GPU optimisations explained include spatial subdivision lists, reordering particles in memory, hash collisions, and finally dividing the workload between multiple GPUs. An interesting read.
  • Michal Januszewski from the University of Silesia in Katowice introduces 'Sailfish: Lattice Boltzmann Fluid Simulations with GPUs and Python'. He explains lattice boltzmann fluid simulation, and some of the different configurations of lattice connectivity and collision operators. Moves into code generation examples, and gives a brief explanation of how the GPU implementation works.
  • Nikos Sismanis, Nikos Pitsianis and Xiaobai Sun (Aristotle University, Duke University) cover 'Efficient k-NN Search Algorithms on GPUs'. Starts with an overview of sorting and K-Nearest Neighbour (KNN) search algorithm solutions, including ANN (approximate NN) and lshkit and moves into results including a comparison of thrust::sort with Truncated Bitonic sort. Software is available at http://autogpu.ee.auth.gr/.
  • Thomas True of nVidia explains 'Best Practices in GPU-Based Video Processing' and covers overlapping copy-to-host and copy-to-device operations, and an example of processing bayer pattern images.
  • Scott Rostrup, Shweta Srivastava, and Kishore Singhal from Synopsys Inc. explain 'Tree Accumulations on GPU' using parallel scatter, parallel reduce and parallel scan algorithms.
  • Wil Braithwaite from nVidia presents an interesting talk on 'Interacting with Huge Particle Simulations in Maya using the GPU'. He begins with a brief run-through of the workings of the CUDA SPH example, and then moves onto the particle system including Maya's body forces (uniform, radial, vortex), shape representations (implicit, convex hull, signed distance fields, displacement maps), collision response, SPH equations, and finally data transfer. Ends with a brief overview of rendering the particles in screen space. Neat.
  • David McAllister and James Bigler (nVidia) cover the OptiX internals in 'OptiX Out-of-Core and CPU Rendering' including PTX code generation and optimisation, and converting the OptiX backend to support CPUs via Ocelot and LLVM. An interesting result: LLVM does better at optimising "megafunctions" than small functions, but that's not entirely unexpected given how LLVM works. The presentation finishes with an overview of paging and a tip on bounding volume hierarchies. Good to see Ocelot in the mainstream.
  • Eric Enderton and Morgan McGuire from nVidia explain 'Stochastic Rasterization' (ala 'screen door transparency' rendering) via MSAA for motion blur, depth of field and order-independent transparency, by using a geometry shader to bound the shape and motion of each tri in screen space, and setting up the MSAA masks. Nice.
  • Cliff Woolley presents 'Profiling and Tuning OpenACC Code' (by adding pragmas to C / Fortran code, ala OpenMP) using an example of Jacobi iteration, and there were a number of other talks on the topic.
  • Christopher Bergström introduced 'PathScale ENZO' the alternative to CUDA and OpenCL.
  • Phillip Miller from nVidia gave a broad coverage of 'GPU Ray Tracing'. He starts with myths and claimed facts about GPU raytracing, highlights some commercial GPU raytracers (and the open source OpenCL LuxRenderer) and goes into some details that are better explained in the OptiX Out-of-Core presentation.
  • Phillip Miller follows with 'Advanced Rendering Solutions', where he takes a look at nVidia's iray, and where they believe they can introduce new capabilities for design studios and find a middle ground with re-lighting and physically based rendering.
  • Peter Messmer presents 'CUDA Libraries and Ecosystem Overview', where he provides an overview of the linear algebra cuBLAS and cuSPARSE libraries performance, then moves to signal processing with cuFFT and NPP/VSIP for image processing, next is random numbers via cuRAND and finally ties things up with Thrust.
  • Jeremie Papon and Alexey Abramov discuss the 'Oculus real-time modular cognitive visual system' including GPU accelerated stereo disparity matching, likelihood maps and image segmentation with a parallel metropolis algorithm.
  • Jérôme Graindorge and Julien Houssay from Alyotech present 'Real Time GPU-Based Maritime Scenes Simulation' beginning with ocean simulation and rendering from FFT based wave simulation using HF and LF heightmap components. They then cover rendering the mesh, scene illumination and tone mapping, and a sneak peak at boat interaction. The ocean simulation video is neat.
  • Dan Negrut from the Simulation-Based Engineering Lab at the University of Wisconsin–Madison gives an overview of the lab's multibody dynamics work in 'From Sand Dynamics to Tank Dynamics' including friction, compliant bodies, multi-physics (fluid/solid interactions), SPH, a GPU solution to the cone complementarity problem, ellipsoid-ellipsoid CCD, multi-CPU simulation, and finally vehicle track simulation in sand. Wow. Code is available on the Simulation-Based Engineering Lab website.
  • Max Rietmann of USI Lugano looks at seismology (earthquake simulation) in 'Faster Finite Elements for Wave Propagation Codes' and describes parallising FEM methods for GPUs in SPECFEM3D.
  • Dustin Franklin from GE introduces GE's MilSpec ruggedised Kepler-based GPU solutions and Concurrent Redhawk6 in 'Sensor Processing with Rugged Kepler GPUs'. Looks at some example applications including hyperspectral imaging, mosaicing, 360 degree vision, synthetic aperture radar processing, and space-time adaptive processing for moving target identification.
  • Graham Sanborn of FunctionBay presents 'Particle Dynamics with MBD and FEA Using CUDA' and gives a brief overview of their combined CPU/GPU multi-body FEA system and briefly describes the contact, contact force, and integration steps.
  • Ritesh Patel and Jason Mak of University of California, Davis cover the Burrows-Wheeler Transform, Move-to-Front Transform and Huffman Coding in 'Lossless Data Compression on GPUs'. They find merge sort for BWT performs best on the GPU, explain the parallel MTF transform and Huffman coding in illustrative detail, and tie things up with benchmarks; unfortunately the GPU comes in 2.78x slower than the CPU.
  • Nikolai Sakharnykh and Nikolay Markovskiy from NVIDIA provide an indepth explanation of their GPU implementation of solving ADI with tridiagonal systems in '3D ADI Method for Fluid Simulation on Multiple GPUs'.
  • Enrico Mastrostefano, Massimo Bernaschi, and Massimiliano Fatica investigate breadth-first search in 'Large Graph on multi-GPUs' and describe how best to parallelise it across multiple GPUs by using adjacency lists and level frontiers to minimise the data exchange.
  • Bob Zigon from Beckman Coulter presents '1024 bit Parallel Rational Arithmetic Operators for the GPU' and covers exact 1024 bit rational arithmetic (add,sub,mul,div) for the GPU. Get the 1024 bit arithmetic code here.
  • Roman Sokolov and Andrei Tchouprakov of D4D Technologies discuss 'Warped parallel nearest neighbor searches using kd-trees', where they take a SIMD-style approach by grouping tree searches via voting (ballot).
  • David Luebke from nVidia takes a broad look at CG in 'Computational Graphics: An Overview of Graphics Research @ NVIDIA' and provides an overview of research which is featured in a number of previous talks and other GTC talks including edge aware shading, ambient occlusion via volumes and raycasting, stochastic rendering, improved image sampling and reconstruction, global illumination, and CUDA based rasterization.
  • Johanna Beyer and Markus Hadwiger from King Abdullah University of Science and Technology discuss 'Terascale Volume Visualization in Neuroscience', where each cubic mm of the brain scanned with an electron microscope generates 800 terabytes of data. The idea here is to leverage the virtual memory manager to do all the intelligent caching work, rather than a specialised spatial data structure for the volume rendering.
  • Mark Kilgard introduces the NV_path_rendering extension in 'GPU-Accelerated Path Rendering', and demonstrates using the GPU to render PDF, flash, clipart, etc. Contains some sample code.
  • Janusz Będkowski from the Warsaw University of Technology presented 'Parallel Computing In Mobile Robotics For RISE', a full GPGPU solution for processing mobile robot laser scan data through to navigation. It starts with data registration into a decomposed grid, which is then used for scan matching with point-to-point Iterative Closest Point (ICP). Next is estimating surface normals using principal component analysis, demonstrated on Velodyne datasets. This is used to achieve point-to-plane ICP, and he demonstrates a 6D SLAM loop closure. It all finishes off with a simple gradient-based GPU path planner.
Note that in recent days more presentation PDFs have been uploaded, so there is still plenty to look through, and with all the content it's difficult to cover it all - take a look yourself! I'll leave you with a video from the GTC 2012 keynote on rendering colliding galaxies: