Convert LaTeX to Word

A big problem with writing in LaTeX is collaborating with colleagues who don’t use it. One option is to generate a Word .docx version and use the comments and track changes features in Word / LibreOffice. This does require manually copying the changes back to LaTeX so isn’t quite as nice as using latexdiff (see earlier post) but is slightly easier than adding comments to a PDF.

The best program I’ve found for converting LaTeX to Word is the open source (GPL) command line tool, pandoc.

Basic usage is quite straightforward:

pandoc latex_document.tex -o latex_document_word_version.docx

The conversion isn’t perfect (figures and tables can get a bit mangled) but it does a good job with the text.

Pandoc can convert between many different formats, including from markdown and reStructuredText (commonly used for software documentation), so it is worth having installed.
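
For example, converting a markdown file to Word (file names here are just for illustration; pandoc infers the formats from the file extensions):

pandoc notes.md -o notes.docx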

Convert Sentinel-2 Data Using GDAL

The latest version of GDAL (2.1) has a driver to read Sentinel-2 data. Like HDF files, these are read as subdatasets. Running ‘gdalinfo’ on the zipped folder or the .xml file contained within the .SAFE directory will display all the subdatasets, as well as all the metadata (so quite a lot of information!).

As with HDF5 you can pass the subdataset names to gdalinfo to get more information or gdal_translate to extract them as a separate dataset.
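
For example (the subdataset name below is illustrative; copy the exact name from the gdalinfo output for your scene):

gdalinfo SENTINEL2_L1C:S2/S2.xml:10m:EPSG_32630
gdal_translate -of KEA \
    SENTINEL2_L1C:S2/S2.xml:10m:EPSG_32630 \
    S2_10m_EPSG_32630.kea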

To make it easier to extract all the subdatasets I wrote a script, which can be downloaded from the repository.

The script will get a list of all subdatasets using GDAL:

from osgeo import gdal
dataset = gdal.Open('S2/S2.xml', gdal.GA_ReadOnly)
subdatasets = dataset.GetSubDatasets()
dataset = None

The ones to be extracted are for the 10, 20 and 60 m resolution band groups for each UTM zone (if the file crosses multiple zones).

For each subdataset it will give an output name, replacing the EPSG code with the UTM zone and ‘:’ with ‘_’.

Then the gdal_translate command, called using subprocess, is used to create a new file for each; by default the output is KEA format.
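
A minimal sketch of this step, reusing the ‘subdatasets’ list from above (the output naming here is simplified; the full script handles the EPSG code to UTM zone renaming):

import subprocess

for subdataset_name, _ in subdatasets:
    # Strip the driver prefix and build an output file name from the rest
    out_name = subdataset_name.split(':', 1)[-1]
    out_name = out_name.replace('/', '_').replace(':', '_') + '.kea'
    subprocess.check_call(['gdal_translate', '-of', 'KEA',
                           subdataset_name, out_name])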

To run the script first install GDAL 2.1; the conda-forge channel has recent builds, which can be installed using conda:

conda create -n gdal2 -c conda-forge gdal
source activate gdal2

(If you are on Windows leave out ‘source’)

To extract all subdatasets from a zipped Sentinel-2 scene to the current directory you can then run the script, passing ‘-o .’ and the zipped scene as the input file.

The gdal_translate command used is printed to the screen.

The default output format is KEA; you can change it using the ‘--of’ flag. For example, to convert an unzipped scene to GeoTiff, run the script with ‘-o . --of GTiff’ and the .xml file within the .SAFE directory as the input file.

To get the file extension for all supported drivers, and some creation options, the ‘get_gdal_drivers’ module from arsf_dem_scripts is optionally used. You can just download this file (using curl on OS X or wget on Linux) and copy it into the same directory the script has been saved to.

Working with full waveform LiDAR data in SPDLib (part 1)

The SPD file format was designed around storing LiDAR pulses with digitised waveforms and associated points. The most recent version (3.3) has the ability to import waveform data from LAS files using LASlib, which is part of LAStools. Currently binaries are only available for Linux (although binaries for OS X are planned); they can be installed through conda.

conda install -c osgeo -c rios -c conda-forge spdlib

Windows users are advised to install using a Linux Virtual Machine. This post also has more details on installing conda.

For this example we will use LAS 1.3 files acquired by the NERC Airborne Research Facility (NERC-ARF, previously ARSF) over Borneo using a Leica ALS50-II instrument with a full waveform digitiser.

Once you have registered with NEODC and applied for access to the ARSF archive, these data can be downloaded from the archive.

You can also follow along with any of the other NERC-ARF datasets or other waveform LAS files you have.

  1. Convert LAS to SPD format
    First you need a WKT file to define the projection. This step is optional but is more reliable than reading the projection from the LAS file. For this example the projection is UTM zone 50 N; the WKT file is named UTM_WGS84_Z50_N.wkt below.

    Then convert to an unsorted SPD file (UPD).

    spdtranslate --if LAS --of UPD \
                 -x LAST_RETURN \
                 --input_proj UTM_WGS84_Z50_N.wkt \
                 -i LDR-FW-RG13_06-2014-303-05.LAS \
                 -o LDR-FW-RG13_06-2014-303-05.spd
  2. Subset SPD file (optional)

    As full waveform processing is quite intensive, it is recommended to subset the data for the purpose of running through this tutorial. You can do this using the spdsubset command.

    spdsubset --xmin 494400 --ymin 524800 \
              --xmax 494800 --ymax 525000 \
              -i LDR-FW-RG13_06-2014-303-05.spd \
              -o LDR-FW-RG13_06-2014-303-05_subset.spd
  3. Decompose waveform

    One of the limitations of discrete return systems is that only a given number of ‘points’ are recorded for each pulse (normally 2 – 4) and the rest of the information is lost. As full waveform data record the entire waveform, it is possible to extract more returns after the data are acquired. A common approach to this is ‘Gaussian Decomposition’, which involves fitting Gaussian distributions to the peaks (see the toy sketch after these steps); within SPDLib this is available as the ‘spddecomp’ command.

    spddecomp --all --noise --threshold 25 \
              -i  LDR-FW-RG13_06-2014-303-05_subset.spd \
              -o  LDR-FW-RG13_06-2014-303-05_subset_decomp.spd

    This will still take around 5 minutes to run. If you decide to decompose the full dataset afterwards, expect it to take an hour or so.

  4. Export returns to LAS file
    The final step for this part of the tutorial is to export the returns to a LAS file, using the spdtranslate command.

    spdtranslate --if SPD --of LAS \
                 -i LDR-FW-RG13_06-2014-303-05_subset_decomp.spd \
                 -o LDR-FW-RG13_06-2014-303-05_subset_decomp.las
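
As a toy illustration of the Gaussian decomposition idea from step 3 (this is not SPDLib’s implementation, just a minimal sketch fitting one Gaussian to a simulated waveform using SciPy):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, centre, width):
    return amplitude * np.exp(-((t - centre) ** 2) / (2 * width ** 2))

# Simulated waveform: two overlapping returns plus noise
t = np.arange(0, 100, 1.0)
waveform = gaussian(t, 120, 40, 3) + gaussian(t, 60, 55, 4)
waveform += np.random.normal(0, 2, t.size)

# Fit a single Gaussian, starting the search at the largest peak
peak_location = t[np.argmax(waveform)]
params, _ = curve_fit(gaussian, t, waveform,
                      p0=[waveform.max(), peak_location, 3.0])
print("Fitted amplitude, centre, width:", params)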

The next part of the tutorial will cover more tools in SPDLib for classifying ground returns and calculating LiDAR metrics.

If you have further questions about using SPDLib please contact the mailing list (details are available from the SPDLib website). If you have any questions about working with NERC-ARF data contact the NERC-ARF Data Analysis Node (NERC-ARF-DAN) or follow us on Twitter: @NERC_ARF_DAN.

Create a CSV with the coordinates from geotagged photos

Recently I took a lot of photos on my phone during fieldwork (survey for the NERC-ARF LiDAR and hyperspectral calibration flights) and wanted to extract the GPS coordinates from each photo so I could load them into QGIS using the add delimited text dialogue. The aim wasn’t to have precise locations for each photo (GPS positions for each of the points surveyed were recorded separately) but to give a quick idea of the locations we’d visited before the main GPS data were processed.

A while ago (2012!) I wrote a script for this task. The script (CreateJPEGKMZ) requires pillow and the imagemagick command line tools.

The code isn’t particularly tidy (despite my recent updates) but it basically pulls the GPS location out of the EXIF tags and writes this to a CSV file. To create a KMZ, a thumbnail image and a corresponding KML file are generated; both are then zipped together to produce the KMZ, which can be opened in Google Earth.
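
The EXIF-reading step can be sketched with pillow alone (a minimal illustration, not the script itself; it assumes the standard EXIF GPS tags are present):

from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def _to_float(value):
    # Handles (numerator, denominator) tuples (older pillow)
    # and IFDRational values (newer pillow)
    try:
        return float(value[0]) / float(value[1])
    except TypeError:
        return float(value)

def get_lat_lon(jpeg_path):
    # Return (latitude, longitude) in decimal degrees, or None
    exif = Image.open(jpeg_path)._getexif()
    if exif is None:
        return None
    tags = {TAGS.get(k, k): v for k, v in exif.items()}
    gps = {GPSTAGS.get(k, k): v
           for k, v in tags.get("GPSInfo", {}).items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None

    def dms_to_degrees(dms):
        degrees, minutes, seconds = [_to_float(v) for v in dms]
        return degrees + minutes / 60.0 + seconds / 3600.0

    latitude = dms_to_degrees(gps["GPSLatitude"])
    if gps.get("GPSLatitudeRef", "N") == "S":
        latitude = -latitude
    longitude = dms_to_degrees(gps["GPSLongitude"])
    if gps.get("GPSLongitudeRef", "E") == "W":
        longitude = -longitude
    return latitude, longitude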

To use the script to write a CSV file:

CreateJPEGKMZ --outcsv gps_points.csv input_jpeg_files

To also create KMZ files for each photo:

CreateJPEGKMZ --outcsv gps_points.csv \
              --outkmz output_kmz_files \
              input_jpeg_files

This was also a good lesson in the benefits of having scripts in version control and publicly available.

Scalable image segmentation using RSGISLib

For application to very large remote sensing datasets, an approach to “Scalable image segmentation” was presented in [1], using RSGISLib. In the paper a 30 m spatial resolution satellite mosaic of Australia was segmented by splitting it into tiles, processing each tile on a separate node of a HPC, merging the results and then performing a second segmentation to remove artefacts at tile boundaries.
For a full description of the approach see section 5.3 of the paper, which is open access and available to view online.

A version of this approach, designed to be used on a single machine, is now available in RSGISLib through the performTiledSegmentation function. The function is called in a similar way to the existing runShepherdSegmentation function.

For instructions on installing the latest version of RSGISLib see this post, skipping to section 7 if you already have Linux or OS X installed.

The code below presents an example of applying either the tiled or standard segmentation through the use of a command line flag:

#!/usr/bin/env python
"""
Apply segmentation to image
"""

from __future__ import print_function
import argparse
import shutil
import sys
import tempfile
from rsgislib.segmentation import tiledsegsingle
from rsgislib.segmentation import segutils

# Set values for clustering (example values)
NUM_CLUSTERS = 60
MIN_PIXELS = 100
DIST_THRESHOLD = 100

# Set values for tiled segmentation (example values)
TILE_WIDTH = 2000
TILE_HEIGHT = 2000

parser = argparse.ArgumentParser()
parser.add_argument("inputimage", nargs=1,
                    type=str, help="Input Image")
parser.add_argument("outputclumps", nargs=1,
                    type=str, help="Output clumps")
parser.add_argument("--tiled", action="store_true",
                    default=False,
                    help="Use tiled segmentation")
args = parser.parse_args()

# Make temp directory for intermediate files
temp_dir = tempfile.mkdtemp(prefix='rsgislib_seg_')

if args.tiled:
    # If requested run tiled segmentation
    tiledsegsingle.performTiledSegmentation(args.inputimage[0],
                                            args.outputclumps[0],
                                            tmpDIR=temp_dir,
                                            tileWidth=TILE_WIDTH,
                                            tileHeight=TILE_HEIGHT,
                                            numClusters=NUM_CLUSTERS,
                                            minPxls=MIN_PIXELS,
                                            distThres=DIST_THRESHOLD,
                                            sampling=100, kmMaxIter=200)
else:
    # If not run standard segmentation
    segutils.runShepherdSegmentation(args.inputimage[0],
                                     args.outputclumps[0],
                                     tmpath=temp_dir,
                                     numClusters=NUM_CLUSTERS,
                                     minPxls=MIN_PIXELS,
                                     distThres=DIST_THRESHOLD,
                                     sampling=100, kmMaxIter=200)

# Remove temp directory once finished
shutil.rmtree(temp_dir)

Note, script edited for post – full version available from GitHub.

[1] Clewley, D.; Bunting, P.; Shepherd, J.; Gillingham, S.; Flood, N.; Dymond, J.; Lucas, R.; Armston, J.; Moghaddam, M. A Python-Based Open Source System for Geographic Object-Based Image Analysis (GEOBIA) Utilizing Raster Attribute Tables. Remote Sensing 2014, 6, 6111-6135.

Working with hyperspectral data in ENVI BIL format using Python

For working with ENVI files I normally use GDAL as code can then be applied to different formats. However, there are a couple of limitations with GDAL when working with hyperspectral data in ENVI format:

  1. GDAL doesn’t copy every item from the header file to a new header file if they don’t fit in with the GDAL data model. Examples are FWHM and comments. Sometimes extra attributes are copied to the aux.xml file GDAL creates, but these files aren’t read by ENVI or other programs based on IDL (e.g., ATCOR).
  2. For data stored Band Interleaved by Line (BIL) rather than Band Sequential (BSQ), reading and writing a band at a time is inefficient as it is necessary to keep jumping around the file (see the sketch below).
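
To see why, consider how a BIL file is laid out: for each scan line all bands are stored consecutively, so a NumPy memmap view has shape (lines, bands, samples). Reading a line is one contiguous read; reading a band is not. (The dimensions below are made up for illustration; real values come from the ENVI header.)

import numpy as np

# Example dimensions and data type; read the real ones from the header
samples, lines, bands = 320, 512, 244
bil = np.memmap('in_file.bil', dtype=np.float32, mode='r',
                shape=(lines, bands, samples))

line_100 = bil[100, :, :]  # all bands for one line: contiguous
band_50 = bil[:, 50, :]    # one band: a strided read per line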

To overcome these issues NERC-ARF-DAN use their own Python functions for reading / writing header files and loading BIL files a line at a time. These functions have been tidied up and released through the NERC-ARF Tools repository on GitHub. The functions depend on NumPy.

To install them it is recommended to use the following steps:

  1. Download miniconda and follow the instructions to install it.
  2. Open a ‘Command Prompt’ / ‘Terminal’ window and install numpy by typing:
    conda install numpy
  3. Download ‘arsf_tools’ from GitHub.
  4. Unzip and within a ‘Command Prompt’ or ‘Terminal’ window navigate to the folder using (for example):
    cd Downloads\arsf_tools-master
  5. Install the tools and library by typing:
    python setup.py install

Note, if you are using Linux you can also install the ARSF binary reader, which is written in C++. The ‘arsf_envi_reader’ module will import this if it is available, as it is faster than the standard NumPy BIL reader.
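
The fallback import can be illustrated like this (module and variable names here are hypothetical; the real module handles this internally):

try:
    import arsf_binaryreader  # hypothetical name for the C++ reader
    HAVE_BINARY_READER = True
except ImportError:
    HAVE_BINARY_READER = False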

If you are a UK based researcher with access to the JASMIN system the library is already installed and can be loaded using:

module load contrib/arsf/arsf_tools

A simple example of reading each line of a file, adding 1 to every band and writing back out again is:

from arsf_envi_reader import numpy_bin_reader
from arsf_envi_reader import envi_header

# Open file for output (binary mode)
out_file = open("out_file.bil", "wb")

# Open input file
in_data = numpy_bin_reader.BilReader("in_file.bil")

for line in in_data:
    out_line = line + 1
    # Write the modified line straight to the output file
    out_line.tofile(out_file)

# Copy header (the dimensions and data type are unchanged)
in_header = envi_header.find_hdr_file("in_file.bil")
envi_header.write_envi_header("out_file.bil.hdr",
                              envi_header.read_hdr_file(in_header))

# Close files
in_data = None
out_file.close()

A more advanced example is applying a Savitzky-Golay filter to each pixel. As the filter requires every band for each pixel, it is efficient to work with BIL files.
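
A minimal sketch of the filtering loop using SciPy (assuming, as above, that each line is returned as a 2D array of bands by samples):

from scipy.signal import savgol_filter

from arsf_envi_reader import envi_header
from arsf_envi_reader import numpy_bin_reader

out_file = open("out_smoothed.bil", "wb")
in_data = numpy_bin_reader.BilReader("in_file.bil")

for line in in_data:
    # Filter along the band axis for every pixel in the line
    smoothed = savgol_filter(line, window_length=7, polyorder=3, axis=0)
    smoothed.astype(line.dtype).tofile(out_file)

# Copy the header, as in the previous example
in_header = envi_header.find_hdr_file("in_file.bil")
envi_header.write_envi_header("out_smoothed.bil.hdr",
                              envi_header.read_hdr_file(in_header))

in_data = None
out_file.close()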

For developing algorithms using spatial data, in particular across multiple files, it is recommended to convert the files to a band-sequential format using the ‘gdal_translate’ command so they can be read by programs which use GDAL, for example RIOS or RSGISLib.

To convert files from ENVI BIL to ENVI BSQ and copy all header attributes the ‘envi_header’ module can be used after converting the interleave with GDAL. For example:

import os
import subprocess
from arsf_envi_reader import envi_header

# Set input and output image (could get with argparse)
input_image = "input.bil"
output_image = "output_bsq.bsq"

# Get interleave from file extension
output_interleave = os.path.splitext(output_image)[-1].lstrip(".")

# 1. Convert interleave
print("Converting interleave to {}".format(output_interleave.upper()))
gdal_translate_cmd = ["gdal_translate",
                      "-of", "ENVI",
                      "-co", "INTERLEAVE={}".format(output_interleave.upper())]
gdal_translate_cmd.extend([input_image, output_image])
subprocess.check_call(gdal_translate_cmd)

# 2. Copy header (GDAL doesn't copy all items)
# Find header files
input_header = envi_header.find_hdr_file(input_image)
output_header = envi_header.find_hdr_file(output_image)

# Read input header to dictionary
input_header_dict = envi_header.read_hdr_file(input_header)

# Change interleave
input_header_dict["interleave"] = output_interleave.upper()

# Write header (replace one generated by GDAL)
envi_header.write_envi_header(output_header, input_header_dict)

Useful GDAL commands for working with hyperspectral data

The Geospatial Data Abstraction Library (GDAL) contains a number of utilities for working with remote sensing data. The following are some commands which are particularly useful when working with airborne hyperspectral data.

Create subset and change format

To create a band and spatial subset of a larger hyperspectral dataset the following command can be used:

gdal_translate -of GTiff -co COMPRESS=LZW \
 -projwin 481500 766000 482500 765000 \
 -b 190 -b 40 -b 15  \
 f281013b_utm_wgs84N50_mapped.bil \
 outputs/f281013b_utm_wgs84N50_mapped_subsection.bil
  • -of: Output Format – we are using GeoTiff. For all available options see the GDAL documentation
  • -co: Creation Option – specifies the creation options, these are specific to the format. For GeoTiff we use compression to reduce the file size. For ENVI we can use the creation options to specify the Interleave (BSQ or BIL)
  • -projwin: A rectangle delineating an area to be extracted, in the format upper left X, upper left Y, lower right X, lower right Y (i.e., min X, max Y, max X, min Y)
  • -b: A given band to extract
  • final two arguments: input and output filenames

Create mosaic of multiple flightlines

Often a mosaic of all individual flightlines is required. Due to the large size of the mapped files this can be time consuming. One option is to first make a ‘Virtual Raster’ using the ‘gdalbuildvrt’ command:

gdalbuildvrt -srcnodata 0 outputs/f2810_mapped_mosaic.vrt \
 outputs/f281013b_utm_wgs84N50_mapped_3band.bil
  • -srcnodata: No data value of input lines
  • first argument: output VRT file
  • final arguments: list of input files

Rather than creating a new raster, this creates a small .vrt text file which contains links to the individual lines.

Many programs which use GDAL (e.g., TuiView) can open this file directly. To make opening the file easier we can do some optimisation. First pre-calculate statistics using:

gdalinfo -stats outputs/f2810_mapped_mosaic.vrt
  • -stats: Calculate statistics and save to .aux.xml file
  • final argument: file name

The gdalinfo command can also be used to print information about a raster. Note for the .vrt file it shows the individual files which are included.

After creating statistics, generate overviews (lower resolution copies of the data) using gdaladdo:

gdaladdo -r average outputs/f2810_mapped_mosaic.vrt 2 4 8 16
  • -r average: Use average for resampling overviews
  • first argument: file name
  • numbers: overview levels to build (2x, 4x, 8x and 16x)

We can also convert the VRT mosaic into a real raster using gdal_translate (as shown previously).
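
For example, to write the mosaic out as an ENVI BIL file (the output name here is assumed for illustration; the INTERLEAVE creation option was introduced earlier):

gdal_translate -of ENVI -co INTERLEAVE=BIL \
 outputs/f2810_mapped_mosaic.vrt \
 outputs/f2810_mapped_mosaic.bil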

Extract pixel values for a given location

gdallocationinfo -valonly -geoloc \
 outputs/f281013b_utm_wgs84N50_mapped_subsection.bil \
 481801 765289
  • -valonly: Only print values to the screen
  • -geoloc: Interpret x and y values as georeferenced coordinates rather than pixel and line numbers
  • first argument: file name
  • x y: Easting and Northing of the pixel to extract values from

This will print the values to the screen. To save as a text file we can redirect the output using ‘>’

gdallocationinfo -valonly -geoloc \
 outputs/f281013b_utm_wgs84N50_mapped_subsection.bil \
 481801 765289 > outputs/extracted_values.txt

This text file can be opened in other programs for further analysis.

This post is based on the hyperspectral practical session from the NERC Airborne Survey and Research Facility workshop held at Plymouth Marine Laboratory, March 2016. For more information about the ARSF Data Analysis Node (ARSF-DAN) and details about future training opportunities follow @ARSF_DAN on Twitter. Hyperspectral data acquired by ARSF are available to download from NEODC.