Tag Archives: Data Handling

Working with hyperspectral data in ENVI BIL format using Python

For working with ENVI files I normally use GDAL, as the code can then be applied to different file formats. However, there are a couple of limitations with GDAL when working with hyperspectral data in ENVI format:

  1. GDAL doesn’t copy every item from the header file to a new header file if they don’t fit in with the GDAL data model; examples are FWHM and comments. Sometimes extra attributes are copied to the ‘.aux.xml’ file GDAL creates, but these files aren’t read by ENVI or other programs based on IDL (e.g., ATCOR).
  2. For data stored Band Interleaved by Line (BIL) rather than Band Sequential (BSQ), reading and writing a band at a time is inefficient, as it is necessary to keep jumping around the file.

To overcome these issues NERC-ARF-DAN use their own Python functions for reading / writing header files and loading BIL files a line at a time. These functions have been tidied up and released through the NERC-ARF Tools repository on GitHub (https://github.com/pmlrsg/arsf_tools). The functions depend on NumPy.

To install them it is recommended to use the following steps:

  1. Download miniconda from http://conda.pydata.org/miniconda.html#miniconda and follow the instructions to install.
  2. Open a ‘Command Prompt’ / ‘Terminal’ window and install numpy by typing:
    conda install numpy
    
  3. Download ‘arsf_tools’ from GitHub (https://github.com/pmlrsg/arsf_tools/archive/master.zip)
  4. Unzip and, within a ‘Command Prompt’ or ‘Terminal’ window, navigate to the folder using (for example):
    cd Downloads\arsf_tools-master
    
  5. Install the tools and library by typing:
    python setup.py install
    

Note: if you are using Linux you can install the ARSF binary reader from https://github.com/arsf/arsf_binaryreader, which is written in C++. The ‘arsf_envi_reader’ module will import this if it is available, as it is faster than the standard NumPy BIL reader.

If you are a UK-based researcher with access to the JASMIN system, the library is already installed and can be loaded using:

module load contrib/arsf/arsf_tools

A simple example of reading each line of a file, adding 1 to every band and writing it back out again is:

from arsf_envi_reader import numpy_bin_reader
from arsf_envi_reader import envi_header

# Open file for output
out_file = open("out_file.bil", "wb")

# Open input file
in_data = numpy_bin_reader.BilReader("in_file.bil")

for line in in_data:
    # Add 1 to every band and write the line back out
    out_line = line + 1
    out_line.tofile(out_file)

# Copy header
envi_header.write_envi_header("out_file.bil.hdr",
                              in_data.get_hdr_dict())

# Close files
out_file.close()
in_data = None

A more advanced example is applying a Savitzky-Golay filter to each pixel. As the filter requires every band for each pixel, it is efficient to work with BIL files, where all the bands for a line of pixels are read together.
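A rough sketch of how this could look is below, assuming SciPy is installed and that each line is returned as a 2D (bands, samples) NumPy array; the output file name and the filter’s window length and polynomial order are placeholders:

import scipy.signal
from arsf_envi_reader import numpy_bin_reader
from arsf_envi_reader import envi_header

# Open input file and a file for the filtered output
in_data = numpy_bin_reader.BilReader("in_file.bil")
out_file = open("out_file_filtered.bil", "wb")

for line in in_data:
    # Assuming 'line' has shape (bands, samples), filtering along
    # axis 0 applies the filter through all bands for each pixel.
    filtered_line = scipy.signal.savgol_filter(line, window_length=5,
                                               polyorder=2, axis=0)
    # Cast back to the original data type so the copied header stays valid
    filtered_line.astype(line.dtype).tofile(out_file)

# Copy the header so the output can be read by ENVI
envi_header.write_envi_header("out_file_filtered.bil.hdr",
                              in_data.get_hdr_dict())

out_file.close()
in_data = None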

For developing algorithms using spatial data, in particular across multiple files, it is recommended to convert the files to a band-sequential format using ‘gdal_translate’ so they can be read by programs which use GDAL, for example RIOS or RSGISLib.

To convert files from ENVI BIL to ENVI BSQ and copy all header attributes, the ‘envi_header’ module can be used after converting the interleave with GDAL. For example:

import os
import subprocess
from arsf_envi_reader import envi_header

# Set input and output image (could get with argparse)
input_image = "input.bil"
output_image = "output_bsq.bsq"

# Get interleave from file extension
output_interleave = os.path.splitext(output_image)[-1].lstrip(".")

# 1. Convert interleave
print("Converting interleave to {}".format(output_interleave.upper()))
gdal_translate_cmd = ["gdal_translate",
                      "-of", "ENVI",
                      "-co", "INTERLEAVE={}".format(output_interleave.upper())]
gdal_translate_cmd.extend([input_image, output_image])
subprocess.call(gdal_translate_cmd)

# 2. Copy header (GDAL doesn't copy all items)
# Find header files
input_header = envi_header.find_hdr_file(input_image)
output_header = envi_header.find_hdr_file(output_image)

# Read input header to dictionary
input_header_dict = envi_header.read_hdr_file(input_header)

# Change interleave
input_header_dict["interleave"] = output_interleave.upper()

# Write header (replace one generated by GDAL)
envi_header.write_envi_header(output_header, input_header_dict)

A script to batch download files using PycURL

Downloading a lot of large files (e.g., remote sensing data) is awkward using a browser: you don’t want to set all the files downloading at once, and sitting around waiting for one download to finish so another can start is tedious. You might also want to download files to somewhere other than your ‘Downloads’ folder.

To make downloading files easier you can use the CURLDownloadFileList.py script, available on Bitbucket.

The script uses PycURL, which is available to install through conda using:

conda install pycurl

It takes a text file with a list of files to download as input. To build this list in a browser, you can right-click on each link and select ‘Copy Link Location’ (the exact text will vary depending on the browser you are using) instead of downloading the file. Some file names follow a logical pattern, so you could use this to get a couple of links and then copy and paste them, changing the parts that vary (e.g., the number of the file, location, date etc.), as in the snippet below.
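As a quick illustration, a few lines of Python can write such a list out; the base URL, tile numbering and output file name below are made up:

# Write a list of download URLs that only differ by a tile number
# (the base URL, numbering and output file name are made-up examples)
with open("FileNamesToDownload.txt", "w") as file_list:
    for tile in range(1, 11):
        url = "http://example.com/PALSAR_2010/tile_{:03d}.tar.gz".format(tile)
        file_list.write(url + "\n")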

Once you have the list of files to download the script is run using:

python CURLDownloadFileList.py \
     --filelist ~/Desktop/JAXAFileNamesCut.txt \
     --failslist ~/Desktop/JAXAFileNamesFails.txt \
     --outputpath ~/Desktop/JAXA_2010_PALSAR/ 

Any downloads which fail will be added to the file specified by the ‘--failslist’ argument.

By default the script will check if a file has already been downloaded and won’t download it again; you can skip this check using ‘--nofilecheck’. It is also possible to set a time to pause between downloads with ‘--pause’, to avoid being rejected by a server when doing big downloads. For all the available options run:

python CURLDownloadFileList.py  --help

As it runs from the command line, you can set it going on one machine (e.g., a desktop at the office) and check on the progress remotely (e.g., from your laptop at home) using SSH. So that the script keeps running when you close the session, you can run it within GNU Screen, which is installed by default on OS X. To start it type:

screen

then type ctrl+a ctrl+d to detach the session. You can reattach using:

screen -R

to check the progress of your downloads.
Alternatively you can use tmux, although this isn’t installed by default.

Reading MATLAB files with Python

If you need to read MATLAB (.mat) data files, there is a function within scipy.io which allows you to do this. Usage is simple and well explained in the tutorial:

  1. Import the file:
    from scipy import io
    
    inMATFile = 'ssurgo_data.mat'
    soildata = io.loadmat(inMATFile)
    
  2. Get a list of keys:
    soildata.keys()
    
  3. Extract data to a NumPy array:
    soildata_varname = soildata['varname']
    
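Note that loadmat returns an ordinary Python dictionary which also contains metadata keys added by SciPy (‘__header__’, ‘__version__’ and ‘__globals__’), so when exploring a file it can help to list only the actual variables; for example (assuming every remaining entry is an array):

# List the MATLAB variables in the file, skipping the metadata keys
# that loadmat adds ('__header__', '__version__', '__globals__')
for key, value in soildata.items():
    if not key.startswith('__'):
        print(key, value.shape, value.dtype)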

Reading tabular data in a non-standard format

This post was going to be called ‘faffing around with data’, which might summarise many people’s feelings towards dealing with data that isn’t in the format they need it to be in. An earlier post described adding an ENVI header to binary files so they could be read with GDAL. This post details using Python to read tabular data stored in a text file, although many of the functions would apply to extracting any data from a text file.

Built-in functions

Before setting out it’s a good idea to confirm the data can’t be read with any of the functions already available in Python. If it opens in Excel, it’s likely these will work and they may be an easier option (no point reinventing the wheel!).

The CSV module in the standard Python library is really easy to use and can handle various delimiters:

import csv

# Open file
f = open('soil_moisture.csv', 'r', newline='')
csvfile = csv.reader(f, delimiter=',')

# Skip first line (header)
headerline = next(csvfile)

# Iterate through file
for line in csvfile:
    print('First col = ', line[0])

f.close()

In NumPy, the genfromtxt function can be used to read a file directly into a NumPy array.

import numpy as np

smdata = np.genfromtxt('soil_moisture.csv', delimiter=',', skip_header=1)

Depending on how many files you have to process, a bit of clever use of find and replace in a text editor, or the sed command, might allow you to get your data into a format that can be read by the built-in functions. If this ends up being too messy it will at least provide some insight into the steps needed to write a script to read the data.
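If sed isn’t available (e.g., on Windows), the same kind of substitution can be done with a few lines of Python. A rough sketch, with made-up file names, that swaps semicolons for commas so the file can then be read with the csv module or genfromtxt:

# Replace one delimiter with another so the file can be read as standard CSV
# (file names and delimiters are just examples)
with open('soil_moisture_raw.txt', 'r') as in_file, \
        open('soil_moisture.csv', 'w') as out_file:
    for line in in_file:
        out_file.write(line.replace(';', ','))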

Writing a Custom Reader

The basic structure of a custom reader is to iterate through each line in the file and split it into columns.


# Open file
infile = open('soil_moisture.csv', 'r')

# Read header to separate variable (repeat for multiple lines)
header1 = next(infile)

# Iterate through file
for line in infile:
    # Count delimiters (:)
    ncolon = line.count(':')
    # Check we have 3 (4 columns)
    if ncolon == 3:
        # Separate into columns
        elements = line.split(':')
        print('First col = ', elements[0])
infile.close()

If the data is fixed width and separated by multiple spaces, I use the re module in Python to replace consecutive spaces with another character.

import re
line = 'tonzi  2013   12  18 1  1 9  1   20   15.69'
linesub = re.sub(r'\s+', ':', line)
elements = linesub.split(':')

Here ‘\s+’ is a regular expression matching one or more whitespace characters. When choosing the character to use as a replacement, make sure it isn’t used anywhere else in the file.

Time Data

Time can be stored in a range of different formats. Python offers ways of storing and manipulating time and date data as part of the time module. One of the easiest ways of getting a time from a text file into the Python representation is to build a ‘struct_time’ object. If each component of the date (year, month etc.) is stored as a separate column you can use the following:

import time

# 'line' here is assumed to be a list of columns, e.g., from line.split(':')
year = int('20' + line[1])
month = int(line[2])
day = int(line[3])
hour = int(line[4])
minute = int(line[5])
second = int(line[6])

mTimeTS = time.struct_time((year, month, day, hour, minute, second, 0, 0, 0))

If the date is stored in a single column the ‘strptime’ function can be used to convert to a Python time structure. For example:

import time
timeStr = '2014-07-10 08:14:01'
mTimeTS = time.strptime(timeStr,'%Y-%m-%d %H:%M:%S')

Writing data out

The CSV library, mentioned earlier, can also be used to write formatted data out to a CSV file.

outfile = open('outdata.csv', 'w', newline='')

outcsv = csv.writer(outfile)

# Write header row
outcsv.writerow(['x','y'])

# Iterate through input file
for line in infile:
    # Read and split data into 'elements'
    # as described earlier
    
    # Write first two columns out to CSV
    outcsv.writerow([elements[0],elements[1]])

outfile.close()

View the first lines in a text file

A really simple, but useful, UNIX command is head, which will display the first 10 lines of a text file by default; great for quickly checking outputs. You can use the -n flag to specify the number of lines.

# View first 10 lines (the default)
head bigtextfile.csv

# View first 5 lines
head -n 5 bigtextfile.csv

There is also a corresponding tail command to view the last lines.

# View last 10 lines (the default)
tail bigtextfile.csv

# View last 5 lines
tail -n 5 bigtextfile.csv

Join two CSV files based on a common field

Sometimes it’s necessary to join two CSV files together using a common field. I wrote a Python script a while ago to perform this task using the Python csv library (JoinTablesCSV.py). The script is run from the command line and takes the input and output CSV files and the numbers of the columns to match (starting at 0).

For example, to join attributes from inMatchFileName.csv to inRefFileName.csv by matching the first column of each, you can use:

python JoinTablesCSV.py inRefFileName.csv \
    inMatchFileName.csv outFileName.csv 0 0

Note this assumes there is only one record for each ID; if there are multiple (as in a relational database) this simple approach won’t work, and actually loading the data into a relational database would be better. My recommendation for this is SQLite, a self-contained database (i.e., stored in a single file, with no need to set up a server) that can be accessed from Python and R (see the sketch below).
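As an illustration (the table layout and column names below are made up), the two CSV files can be loaded into tables using the sqlite3 module from the standard library and joined with a single SQL statement, which copes with multiple records per ID:

import csv
import sqlite3

# The whole database is stored in a single file
db = sqlite3.connect('joined_data.db')

# Create a table for each CSV (column names are just examples)
db.execute('CREATE TABLE ref (site_id TEXT, elevation REAL)')
db.execute('CREATE TABLE match (site_id TEXT, soil_moisture REAL)')

# Load the rows, skipping the header line in each file
for table, file_name in [('ref', 'inRefFileName.csv'),
                         ('match', 'inMatchFileName.csv')]:
    with open(file_name, newline='') as csv_file:
        rows = csv.reader(csv_file)
        next(rows)  # skip header row
        db.executemany('INSERT INTO {} VALUES (?, ?)'.format(table), rows)
db.commit()

# Join on the common field
for row in db.execute('SELECT ref.site_id, elevation, soil_moisture '
                      'FROM ref JOIN match ON ref.site_id = match.site_id'):
    print(row)

db.close()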

Add ENVI header to binary files

Quite often remote sensing data is supplied as a binary file with the dimensions and geographical information in a separate text file. If you can add the relevant information into an ENVI format header file you can easily read the data using GDAL. If you have a lot of files in this format it’s not too tricky to make a Python script which will pull information from the supplied text file and populate the relevant fields in an ENVI header file.

  1. If you have ENVI, the easiest thing to do is open one of the files, fill in the parameters in the dialogue and then use the header file (‘*.hdr’) generated as a template. This might require playing around with the parameters (data types, byte order etc.) if not all the information you need is supplied with your data. Make sure the data displays OK and is in the correct geographic location.
  2. If you don’t have ENVI you can manually generate the file using the template below as a guide, or, if you have another file which is similar (same projection etc.), translate it to ENVI format using GDAL:
    gdal_translate -of ENVI infile.kea outfile.env
    

    Then use the header file you have just created as a template. It needs to have the same filename as the data, with the extension ‘.hdr’. A list of the data types that the ‘data type’ number in the header corresponds to is available on the IDL website. You may need to experiment with the parameters a little; you can use TuiView to open the data and check everything looks OK.

  3. Once you have a template you can write a Python script, similar to the one below, to create a header, replacing the parameters (size, upper left coordinate) as needed.

headerParameters = {}
headerParameters['fileName'] = 'image'
headerParameters['samples'] = '500'
headerParameters['lines'] = '150'
headerParameters['ULlon'] = '147.39'
headerParameters['ULlat'] = '-25.79'
headerParameters['pixelSize'] = '9.9e-06'

headerText = '''ENVI
description = {{{fileName}}}
samples = {samples}
lines = {lines}
bands = 1
header offset = 0
data type = 4
interleave = bsq
sensor type = Unknown
byte order = 0
map info = {{Geographic Lat/Lon, 1.000, 1.000, {ULlon}, {ULlat}, {pixelSize}, {pixelSize}, WGS-84, units=Degrees}}
wavelength units = Unknown'''.format(**headerParameters)

headerFile = open('image.hdr','w')
headerFile.write(headerText)
headerFile.close()

  4. To create header files for all the data in a directory you need to loop through the files and read the parameters from the corresponding text file (a sketch of the full loop is given after this example). There are lots of string functions built into Python; the split function is particularly handy. Assuming a text file of the form:
    width:500
    height:150
    

    You could extract the parameters using:

    textFile = open('metadata.txt', 'r')
    
    for line in textFile:
        elements = line.split(':')
        elements[0] = elements[0].strip()
        elements[1] = elements[1].strip()
    
        if elements[0] == 'width':
            headerParameters['samples'] = elements[1]
        elif elements[0] == 'height':
            headerParameters['lines'] = elements[1]
    
    textFile.close()
    
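Putting the pieces together, a loop over a directory might look something like the sketch below; the ‘.dat’ extension and the assumption that each image has a matching ‘.txt’ metadata file are made up for illustration:

import glob
import os

# Loop over all binary images in a directory
# (the '.dat' extension and matching '.txt' metadata files are assumptions)
for image in glob.glob('data/*.dat'):
    headerParameters = {'fileName': os.path.basename(image)}

    # Read the width / height from the matching metadata file
    metadata = os.path.splitext(image)[0] + '.txt'
    with open(metadata, 'r') as textFile:
        for line in textFile:
            elements = [item.strip() for item in line.split(':')]
            if elements[0] == 'width':
                headerParameters['samples'] = elements[1]
            elif elements[0] == 'height':
                headerParameters['lines'] = elements[1]

    # Fill in the remaining parameters (ULlon, ULlat, pixelSize), format
    # the template shown earlier and write it out to image + '.hdr'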

There are a couple of examples of scripts to do this on our Bitbucket repository:

  • UnTarHeaderALOS.py – For JAXA ALOS PALSAR tiles, untars, creates header and optionally creates KMZ, using RSGISLib to stack bands and stretch.
  • GenerateENVIHeader.py – Generate ENVI header for files produced by GAMMA. Note for GAMMA the upper left coordinate is the centre of the first pixel (1.5, 1.5 in map info).

Quick look image of a multi-image mosaic

For large mosaics it’s common for images to be delivered / processed in smaller tiles. Creating a quick look is a good way of visually checking the data and making sure there are no missing tiles. The combination of the GDAL Virtual Raster format (VRT; http://www.gdal.org/gdal_vrttut.html), a text file containing links to the individual tiles and the -outsize option in gdal_translate provides a quick way of producing a mosaic. For recursively finding files, the find command is used.

In total, three commands are used to create the quick look mosaic:

# 1) Recursively find files with the extension tif
# (you could use ls if they are all in one directory)

find ./ -name '*.tif' > filelist.txt

# 2) Create a GDAL virtual raster
gdalbuildvrt -input_file_list filelist.txt virtualmosaic.vrt

# 3) Create a PNG 10 % of the size, scaled from 0 - 255
gdal_translate -of PNG -ot Byte -scale -outsize 10% 10% \
    virtualmosaic.vrt quicklookmosaic.png

How well the pixel values scale will depend on the range of values in your input image. If your image looks odd, try leaving off the -scale option and setting the data type (-ot) to the same as the input image. You can then open the output in your normal viewer for remote sensing data and stretch the image there (if you need a remote sensing data viewer, TuiView is our recommendation).

If you remove the -outsize flag, gdal_translate can also be used to create the full-size mosaic. As virtual rasters are handled like standard datasets in anything that uses GDAL, you can use them as part of your processing chain, just as you would a real dataset.
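For example, the virtual mosaic can be opened from Python just like any other dataset (a minimal sketch using the standard GDAL bindings):

from osgeo import gdal

# Open the virtual mosaic exactly as you would a real dataset
dataset = gdal.Open('virtualmosaic.vrt')

print('Size: {} x {}, bands: {}'.format(dataset.RasterXSize,
                                        dataset.RasterYSize,
                                        dataset.RasterCount))

# Read a small subset of the first band into a NumPy array
# (reading the whole mosaic at once could use a lot of memory)
band1_subset = dataset.GetRasterBand(1).ReadAsArray(0, 0, 100, 100)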