Category Archives: Python

Working with hyperspectral data in ENVI BIL format using Python

For working with ENVI files I normally use GDAL, as code can then be applied to different formats. However, there are a couple of limitations with GDAL when working with hyperspectral data in ENVI format:

  1. GDAL doesn’t copy every item from the header file to a new header file if they don’t fit the GDAL data model; examples are FWHM and comments. Sometimes extra attributes are copied to the aux.xml file GDAL creates, but these files aren’t read by ENVI or other IDL-based programs (e.g., ATCOR).
  2. For data stored Band Interleaved by Line (BIL) rather than Band Sequential (BSQ), reading and writing a band at a time is inefficient, as it is necessary to keep jumping around the file.

To overcome these issues NERC-ARF-DAN use their own Python functions for reading / writing header files and loading BIL files a line at a time. These functions have been tidied up and released through the NERC-ARF Tools repository on GitHub (https://github.com/pmlrsg/arsf_tools). The functions depend on NumPy.

To install them it is recommended to use the following steps:

  1. Download miniconda from http://conda.pydata.org/miniconda.html#miniconda and follow the instructions to install.
  2. Open a ‘Command Prompt’ / ‘Terminal’ window and install numpy by typing:
    conda install numpy
    
  3. Download ‘arsf_tools’ from GitHub (https://github.com/pmlrsg/arsf_tools/archive/master.zip)
  4. Unzip and, within a ‘Command Prompt’ or ‘Terminal’ window, navigate to the folder using (for example):
    cd Downloads\arsf_tools-master
    
  5. Install the tools and library by typing:
    python setup.py install
    

Note, if you are using Linux you can install the ARSF binary reader from https://github.com/arsf/arsf_binaryreader, which is written in C++. The ‘arsf_envi_reader’ module will import this if available, as it is faster than the standard NumPy BIL reader.

If you are a UK-based researcher with access to the JASMIN system, the library is already installed and can be loaded using:

module load contrib/arsf/arsf_tools

A simple example of reading each line of a file, adding 1 to every band and writing it back out again is:

from arsf_envi_reader import numpy_bin_reader
from arsf_envi_reader import envi_header

# Open file for output
out_file = open("out_file.bil", "wb")

# Open input file
in_data = numpy_bin_reader.BilReader("in_file.bil")

for line in in_data:
    out_line = line + 1
    out_line.tofile(out_file)

# Copy header
envi_header.write_envi_header("out_file.bil.hdr",
                              in_data.get_hdr_dict())

# Close files
out_file.close()
in_data = None

A more advanced example is applying a Savitzky-Golay filter to each pixel. As the filter requires every band for each pixel, it is efficient to work with BIL files.
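
As a minimal sketch (assuming each line returned by BilReader is a NumPy array with bands along the first axis, and using scipy.signal.savgol_filter with illustrative window and polynomial settings), this could look like:

from arsf_envi_reader import numpy_bin_reader
from arsf_envi_reader import envi_header
from scipy.signal import savgol_filter

# Open file for output
out_file = open("smoothed_file.bil", "wb")

# Open input file
in_data = numpy_bin_reader.BilReader("in_file.bil")

for line in in_data:
    # Smooth the spectrum of every pixel in the line
    # (axis 0 is assumed to be the band axis)
    out_line = savgol_filter(line, window_length=5, polyorder=2, axis=0)
    # Cast back to the input data type so the copied header stays valid
    out_line.astype(line.dtype).tofile(out_file)

# Copy header and close files
envi_header.write_envi_header("smoothed_file.bil.hdr",
                              in_data.get_hdr_dict())
out_file.close()
in_data = None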

For developing algorithms using spatial data, in particular across multiple files, it is recommended to convert the files to a band-sequential format using ‘gdal_translate’ so they can be read by programs which use GDAL, for example RIOS or RSGISLib.

To convert files from ENVI BIL to ENVI BSQ and copy all header attributes, the ‘envi_header’ module can be used after converting the interleave with GDAL. For example:

import os
import subprocess
from arsf_envi_reader import envi_header

# Set input and output image (could get with argparse)
input_image = "input.bil"
output_image = "output_bsq.bsq"

# Get interleave from file extension
output_interleave = os.path.splitext(output_image)[-1].lstrip(".")

# 1. Convert interleave
print("Converting interleave to {}".format(output_interleave.upper()))
gdal_translate_cmd = ["gdal_translate",
                      "-of", "ENVI",
                      "-co", "INTERLEAVE={}".format(output_interleave.upper())]
gdal_translate_cmd.extend([input_image, output_image])
subprocess.call(gdal_translate_cmd)

# 2. Copy header (GDAL doesn't copy all items)
# Find header files
input_header = envi_header.find_hdr_file(input_image)
output_header = envi_header.find_hdr_file(output_image)

# Read input header to dictionary
input_header_dict = envi_header.read_hdr_file(input_header)

# Change interleave
input_header_dict["interleave"] = output_interleave.upper()

# Write header (replace one generated by GDAL)
envi_header.write_envi_header(output_header, input_header_dict)

A script to batch download files using PycURL

Downloading a lot of large files (e.g., remote sensing data) is difficult using a browser: you don’t want to set all the files downloading at once, and sitting around waiting for one download to finish before starting another is tedious. You might also want to download files to somewhere other than your ‘Downloads’ folder.

To make downloading files easier you can use the CURLDownloadFileList.py script, available on Bitbucket.

The script uses PycURL, which is available to install through conda using:

conda install pycurl

It takes a text file with a list of files to download as input. If you were downloading files using a browser, you can instead right click on each link and select ‘Copy Link Location’ (the exact text will vary depending on the browser you are using). Some files follow a logical pattern, so you could copy a couple of links and then paste and edit them as required (e.g., changing the number of the file, location, date, etc.).
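
If the links do follow a pattern, a few lines of Python can generate the list for you; the URL below is made up for illustration:

# Write a list of URLs following a pattern (the URL is hypothetical)
with open("FileNames.txt", "w") as f:
    for tile in range(1, 11):
        f.write("http://example.com/PALSAR_2010_tile{:03d}.tar.gz\n".format(tile))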

Once you have the list of files to download, the script is run using:

python CURLDownloadFileList.py \
     --filelist ~/Desktop/JAXAFileNamesCut.txt \
     --failslist ~/Desktop/JAXAFileNamesFails.txt \
     --outputpath ~/Desktop/JAXA_2010_PALSAR/ 

Any downloads which fail will be added to the file specified by the ‘--failslist’ argument.

By default the script will check if a file has already been downloaded and won’t download it again; you can skip this check using ‘--nofilecheck’. It is also possible to set a time to pause between downloads with ‘--pause’, to avoid being rejected by a server when doing big downloads. For all the available options run:

python CURLDownloadFileList.py --help

As it is running from the command line, you can set it going on one machine (e.g., a desktop at the office) and check on the progress remotely (e.g., from your laptop at home) using ssh. So the script keeps running when you close the session, you can run it within GNU Screen, which is installed by default on OS X. To start it type:

screen

then type ctrl+a ctrl+d to detach the session. You can reattach using:

screen -R

to check the progress of your downloads.
Alternatively you can use tmux, although this isn’t available by default.

Plot 2D Histogram of Pixel Values from Two Images

Recently I wanted to plot the pixel values of two images against each other. I thought it would be good to combine extracting the pixel values (using RIOS) and plotting the data (using matplotlib) in a single script.

The script I wrote (two_band_scatter_plot.py) is available to download from the RSGIS Scripts repository.

There are three main steps:

  1. RIOS is used to iterate through the image in small blocks; for each block, only pixels that have data in each band are retained.
  2. numpy.histogram2d is used to produce a 2D histogram.
  3. The histogram is plotted using matplotlib.

The 2D histogram was based on code from http://oceanpython.org/2013/02/25/2d-histogram/. As there are a lot of points, displaying as a 2D histogram makes the information easier to see.
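
The core of steps 2 and 3 looks something like the sketch below; the arrays dataA and dataB stand in for the pixel values extracted by RIOS (random data is used here so the snippet runs on its own):

import numpy as np
from matplotlib import pyplot as plt

# Stand-in data for the extracted pixel values
dataA = np.random.rand(10000)
dataB = dataA + np.random.normal(0, 0.1, 10000)

# Bin the two bands into a 2D histogram
histogram, xedges, yedges = np.histogram2d(dataA, dataB, bins=100)

# Mask empty bins so they are not drawn
histogram = np.ma.masked_equal(histogram, 0)

# Plot; the histogram is transposed as pcolormesh expects (row, column) = (y, x)
plt.pcolormesh(xedges, yedges, histogram.T)
plt.xlabel("Image A")
plt.ylabel("Image B")
plt.colorbar()
plt.savefig("outplot.pdf")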

The simplest usage of the script is:

python two_band_scatter_plot.py \
--imageA imageA.kea \
--imageB imageB.kea \
--plot outplot.pdf 

This will plot the first band of each image. To change the bands used, the labels for the plot, and the scaling, additional flags can be passed in:

python two_band_scatter_plot.py \
--imageA imageA.kea \
--imageB imageB.kea \
--bandA 1 --bandB 1 \
--plot outplot.pdf \
--plotMin 0 --plotMax 0.5 \
--labelA "13157 - 004 (HH)" \
--labelB "13157 - 006 (HH)"

The output looks something like this:
[Figure: TonziR_13157_003_vs_007_hh]

The script assumes both images are in the same projection and at the same resolution; RIOS takes care of the spatial information. It would be possible to adapt the script so RIOS resamples the data; see the RIOS wiki for more details.

Convert SSURGO soil data to a Raster Attribute Table

SSURGO (Soil Survey Geographic database) provides soil information across the United States. The data is provided as shapefiles containing the mapping units. The attributes for each polygon are stored as text files, which need to be imported into an Access database and linked with the shapefile.

An alternative for working with SSURGO data is to convert the shapefile to a raster, parse the text files and store the attributes for each mapping unit as a Raster Attribute Table (RAT).

To do this the following steps are required:

  1. Use gdal_rasterize to create a raster (a sketch of this step is shown after the list).
  2. Use RSGISLib to convert to a RAT.
  3. Add a column for each attribute using RIOS.
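
A rough sketch of the first step, calling gdal_rasterize through subprocess, is below. The file names are illustrative, and the attribute burnt into the raster (‘MUKEY’, the mapping unit key) is assumed to have been converted to a numeric field first, as SSURGO shapefiles store it as a string:

import subprocess

# Rasterize the SSURGO mapping unit polygons at 30 m resolution
rasterize_cmd = ["gdal_rasterize",
                 "-a", "MUKEY",
                 "-tr", "30", "30",
                 "-ot", "UInt32",
                 "-of", "KEA",
                 "soilmu_a_ca669.shp", "soilmu_ca669.kea"]
subprocess.call(rasterize_cmd)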

A Python script to perform these steps can be downloaded from https://github.com/MiXIL/SSURGO-Utilities.

An example of usage is:

python convertSSURGO2RAT.py --indir CA669 \
  --colname claytotal_ \
  --outformat KEA

A list of all available columns can be viewed using:

python convertSSURGO2RAT.py --printcols

To export the columns as a standard raster (using RSGISLib) pass in the ‘--raster’ flag.

SSURGO data can be downloaded from the USDA NRCS Geospatial Data Portal (http://datagateway.nrcs.usda.gov/).

Import CSV into Python using Pandas

One of the features I like about R is that when you read a CSV file into a data frame you can access columns using names from the header row. The Python Data Analysis Library (pandas) aims to provide a similar data frame structure in Python and also has a function to read a CSV. Once pandas has been installed, a CSV file can be read using:

import pandas
data_df = pandas.read_csv('in_data.csv')

To get the names of the columns use:

print(data_df.columns)

And to access columns use:

colHH = data_df['colHH']

Or if the column name is a valid Python variable name:

colHH = data_df.colHH

This is only a tiny part of pandas; there are lots of features available (which I’m just getting into). One interesting one is the ability to create pivot table reports from a data frame, similar to Excel.
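
For example, a pivot table can be built using pandas.pivot_table (the column names here are made up):

import pandas

data_df = pandas.read_csv('in_data.csv')

# Mean of 'colHH' for each combination of 'site' (rows) and 'year' (columns);
# the column names are hypothetical
report = pandas.pivot_table(data_df, values='colHH',
                            index='site', columns='year',
                            aggfunc='mean')
print(report)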

Recursively finding files in Python using the find command

The find command on a UNIX system is great for searching through directories for files. There are lots of options available, but the simplest usage is:

find PATH -name 'SEARCH'

Combined with the subprocess module in Python, it’s easy to use this search capability within a Python script:

import subprocess

# Set up find command
findCMD = 'find . -name "*.kea"'
out = subprocess.Popen(findCMD, shell=True, stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Get standard out and error
(stdout, stderr) = out.communicate()

# Save found files to list
filelist = stdout.decode().split()

The above code can be used to get the output from any command line tool in Python.
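
For example, subprocess.check_output gives a shorter way of capturing a command’s output (here running ‘gdalinfo’ on a hypothetical file):

import subprocess

# Run gdalinfo and capture its output as a string
info = subprocess.check_output(['gdalinfo', 'imageA.kea']).decode()
print(info)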

Update
As noted in the comment by Sam, a more portable way of finding files is to use the os.walk function combined with fnmatch:

import os, fnmatch

inDIR = '/home/dan/'
pattern = '*kea'
fileList = []

# Walk through directory
for dName, sdName, fList in os.walk(inDIR):
    for fileName in fList:
        if fnmatch.fnmatch(fileName, pattern): # Match search string
            fileList.append(os.path.join(dName, fileName))

Reading MATLAB files with Python

If you need to read MATLAB (.mat) data files, there is a function within scipy.io which allows you to do this. Usage is simple and well explained in the tutorial:

  1. Import the file:

    from scipy import io

    inMATFile = 'ssurgo_data.mat'
    soildata = io.loadmat(inMATFile)

  2. Get a list of keys:

    soildata.keys()

  3. Extract data to a NumPy array:

    soildata_varname = soildata['varname']