Tales of ice and stone
Tuesday, May 19, 2015
An open source geoscientific software environment: install notes
***********************************************************
Dear visitor: since I found myself updating this post too
often, I decided to make it a permanent page. I leave this
here, but the most up-to-date notes are found here.
***********************************************************
These are my notes for installing from source the standard software tools I use most often. The instructions are given in an order that satisfies dependencies. I use openSUSE (currently 13.1) on an AMD 64-bit machine. USE AT YOUR OWN RISK and enjoy!
# prerequisites you will need:
gcc, gcc++, gcc-fortran, make, hdf5, libtiff, gd, cairo, freetype, ncurses
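# On openSUSE these can be installed with zypper; the package names below are my guess
# for 13.1 (the -devel variants are needed when building from source) and may need adjusting:
zypper install gcc gcc-c++ gcc-fortran make hdf5-devel libtiff-devel gd-devel cairo-devel freetype2-devel ncurses-devel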
open a terminal
su root # change to root user
cd /usr/local/src # change to a suitable directory
# Then, for every package "foo", download the tarball (foo.tar.gz) from its site using:
wget foo.tar.gz
# then uncompress and untar:
tar -xvf foo.tar.gz
# and change directory
cd foo/
# then, for each package, follow the instructions below
-------------------------------------------------------------------------------
NETCDF (http://www.unidata.ucar.edu/software/netcdf/)
./configure CC=gcc FC=gfortran F90=gfortran F77=gfortran CPPFLAGS=-DpgiFortran --with-pic
# convenient to install F90 support
# --with-pic is mandatory on 64-bit systems (GDAL compatibility)
make
make check
make install
# add a line to /etc/bash.bashrc.local: export NETCDFHOME=/usr/local
-------------------------------------------------------------------------------
GEOS (http://trac.osgeo.org/geos/)
./configure
make
make install
-------------------------------------------------------------------------------
PROJ (http://trac.osgeo.org/proj/)
./configure --without-jni
make
make install
# Perl module: Geo::Proj4
-------------------------------------------------------------------------------
LIBGEOTIFF (ftp://ftp.remotesensing.org/pub/geotiff/libgeotiff/)
./configure
make
make install
-------------------------------------------------------------------------------
FFTW (http://www.fftw.org/)
./configure --with-pic
make
make install
-------------------------------------------------------------------------------
ImageMagick (http://www.imagemagick.org/script/index.php)
./configure --with-perl --with-fftw # FFTW must be compiled with --with-pic
make
make check
make install
-------------------------------------------------------------------------------
GNUPLOT (http://www.gnuplot.info/)
# gnuplot has trouble with wxWidgets 2.9; use 2.8 instead
./configure
make
make check
make install
-------------------------------------------------------------------------------
GSL - GNU Scientific Library (http://www.gnu.org/software/gsl/)
./configure
make
make check
make install
# get also the Perl module: Math::GSL
-------------------------------------------------------------------------------
GMT (http://gmt.soest.hawaii.edu)
# these are steps for version 5
# adapted from the official instructions (http://gmt.soest.hawaii.edu/projects/gmt/wiki/BuildingGMT)
- get the GMT tree
svn checkout svn://gmtserver.soest.hawaii.edu/gmt5/trunk gmt-dev
- get and untar GSHHG coastlines (ftp://ftp.soest.hawaii.edu/gshhg)
- get and untar country polygons (ftp://ftp.soest.hawaii.edu/dcw)
- copy cmake/ConfigUserTemplate.cmake to cmake/ConfigUser.cmake
- edit the file:
# Set path to GSHHG Shoreline Database [auto]:
set (GSHHG_ROOT "/usr/local/src/gshhg-gmt-nc4-2.2.4")
# Copy GSHHG files to $/coast [FALSE]:
set (COPY_GSHHG TRUE)
# Set path to DCW Digital Chart of the World for GMT [auto]:
set (DCW_ROOT "/usr/local/src/dcw-gmt-1.1.0")
# Copy DCW files to $/dcw [FALSE]:
set (COPY_DCW TRUE)
Build and install GMT:
cd gmt-dev
mkdir build
cd build
cmake ..
make
make install
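# a quick sanity check after installation (assuming GMT 5's single "gmt" wrapper is on the PATH):
gmt --version
gmt pscoast -R0/30/50/72 -JM15c -B5 -Gchocolate -P > test_map.ps # draw a small test map of northern Europe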
-------------------------------------------------------------------------------
PostgreSQL (http://www.postgresql.org)
./configure --with-perl --with-python
gmake
useradd postgres -p postgres # simple password: postgres
# make sure that the home directory for user
# postgres is where you put the data
chown -R postgres /usr/local/src/postgresql-X.X.X
su postgres
gmake check
exit (su root)
gmake install
add to /etc/bash.bashrc.local export PATH=$PATH:/usr/local/pgsql/bin
add to /etc/bash.bashrc.local export MANPATH=$MANPATH:/usr/local/pgsql/man
add to /etc/ld.so.conf /usr/local/pgsql/lib
run /sbin/ldconfig
mkdir /where/your/pgdata/istobelocated # this should be the default directory
chown postgres /where/your/pgdata/istobelocated
su postgres
initdb -D /where/your/pgdata/istobelocated
postmaster -D /where/your/pgdata/istobelocated
su root
cp contrib/start-scripts/linux /etc/init.d/postgresql
make /etc/init.d/postgresql executable
activate postgresql in runlevels 3 and 5 (see the sketch below)
leave the source tree configured and built in order to continue with PostGIS
# you might want to keep the configured /src directory in case you want to uninstall later
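# the last two steps might look like this (a sketch; on openSUSE the classic chkconfig/insserv
# tools still handle init scripts through the compatibility layer):
chmod +x /etc/init.d/postgresql # make the start script executable
chkconfig --add postgresql # register the service
chkconfig postgresql 35 # enable it in runlevels 3 and 5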
-------------------------------------------------------------------------------
PostGIS (http://www.postgis.org)
mv postgis-X.X.X/ postgresql-X.X.X/contrib/postgis-X.X.X/
./configure
# if there were changes in fundamental libraries (like bison), you must first re-run "make" in PostgreSQL
make
make install
su postgres
createdb test
# createlang plpgsql test
psql -f postgis/postgis.sql -d test
psql -f spatial_ref_sys.sql -d test
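# a quick check that the functions were registered (postgis_full_version() is provided by PostGIS):
psql -d test -c "SELECT postgis_full_version();"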
-------------------------------------------------------------------------------
Spatialite (http://www.gaia-gis.it/gaia-sins/)
# requires SQLite
# begin with freexl (https://www.gaia-gis.it/fossil/freexl/index)
./configure
make
make install
# then continue with
# libspatialite (https://www.gaia-gis.it/fossil/libspatialite/index)
# it is important to install and enable libxml2:
./configure --enable-libxml2
make
make install
# and then
# readosm (https://www.gaia-gis.it/fossil/readosm/index)
# spatialite-tools (https://www.gaia-gis.it/fossil/spatialite-tools/index)
# all with:
./configure
make
make install
# then, to install the handy spatialite_gui and the minimalistic spatialite_gis, install
# libgaiagraphics (https://www.gaia-gis.it/fossil/libgaiagraphics/index)
# spatialite_gui (https://www.gaia-gis.it/fossil/spatialite_gui/index)
# librasterlite (https://www.gaia-gis.it/fossil/librasterlite/index)
# spatialite_gis (https://www.gaia-gis.it/fossil/spatialite_gis/index)
# for all these:
./configure
make
make install
# NOTE: libHaru (http://libharu.org/) is an indispensable prerequisite for spatialite_gis. I found it hard to install from the command line, so I got it through the openSUSE Application:Geo repository.
-------------------------------------------------------------------------------
GDAL (http://www.gdal.org)
./configure --with-netcdf=/usr/local --with-png=/usr/lib --with-libtiff=internal --with-geotiff=/usr/local --with-jpeg=/usr/lib --with-geos=/usr/local/bin/geos-config --with-perl --with-python --with-odbc --without-libtool --with-spatialite=/usr/local
make
make install
# Note: The perl module Geo::GDAL gets installed in subdirectories of /usr/local/lib/perl5 ...
# just add that directory to your working Perl path (@INC) using
# use lib '/usr/local/lib/perl5 ...'
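# Alternatively (a sketch; the exact subdirectory depends on your Perl version and is printed
# at the end of "make install"), the directory can be exposed to all scripts via PERL5LIB:
export PERL5LIB=$PERL5LIB:/usr/local/lib/perl5/site_perl # adjust to the actual install path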
-------------------------------------------------------------------------------
QGIS (http://qgis.org)
Check that you also have:
- QWT
change directory: cd /usr/local/src/qgis-X.X.X/
make a new build directory: mkdir build
change directory: cd build
ccmake ..
make
make install
-------------------------------------------------------------------------------
FWTOOLS (http://www.fwtools.maptools.org/)
wget http://home.gdal.org/fwtools/FWTools-linux-2.0.6.tar.gz
tar -xvf FWTools-linux-2.0.6.tar.gz
mv FWTools-linux-2.0.6/ /usr/local/
execute the install script
add to /etc/bash.bashrc.local export PATH=$PATH:/usr/local/FWTools-X.X.X/bin-safe # so it does not mess with the main GDAL
-------------------------------------------------------------------------------
GPSBABEL (http://www.gpsbabel.org/)
./configure
make
make install
# How to make it work with Garmin USB devices when one is not root
# still needs to be investigated
# According to the website, one should do:
# add garmin_gps to /etc/modprobe.conf.local
# create file /etc/udev/rules.d/51-garmin.rules
# with a line SYSFS{idVendor}=="091e", SYSFS{idProduct}=="0003", MODE="0666"
# but this does not work for me.
# I normally execute gpsbabel as root, after first disabling the garmin_gps
# kernel module (see the sketch below)
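# the workaround sketched above (run as root; the output file name is just an example):
modprobe -r garmin_gps # unload the kernel module so gpsbabel can talk to the device via libusb
gpsbabel -t -i garmin -f usb: -o gpx -F tracks.gpx # download tracks from the first Garmin USB device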
Posted by Hernán De Angelis at Thursday, August 22, 2013
Labels: FOSS, geoscience, GIS, open culture, open science, remote sensing, science, software
2013-08-17
Welcome!
Welcome to Tales of ice and stone!
Here you will find (mostly free) resources for learning glaciology, remote sensing and geoscientific data analysis.
From time to time, I will post articles on science and science education.
Enjoy!
Towards an open scientific publishing system for the digital era
The current system of scientific publishing has existed for nearly 350 years. It is based on peer-review and publishing in scholarly journals and books. The purpose of peer-review is to ensure minimum levels of scientific quality and relevance, whereas publishing companies bear the costs, normally recovered by means of subscriptions and book sales. This system has worked well, implementing rigorous examination of scientific work and fueling scientific and economic development through the efficient dissemination of results. It is, unfortunately, far from perfect. I won't repeat all the criticisms here; more thorough analyses can be found almost everywhere on the internet, for example in Donald Forsdyke's page and this article by Larry Wasserman. For the sake of this post, suffice it to say that there is concern among many scholars that the ills of the system have largely outweighed the benefits.
One particular criticism concerns the case in which journals abuse their position as bearers of the scientific message. Established journals may sometimes fall into the hands of large publishing companies that demand exorbitant sums of money for subscriptions. The severity of this problem can be better understood when one considers that peer-review is an unpaid service that every scientist renders to the scientific community. Also note that, in the case of books, authors usually get very little pay in comparison with the cost of each book. In every case, most of the money usually goes to the publishing companies. The cherry on the cake is that these companies usually demand the transfer of copyright from the authors!
As a result of this system, many of the important journals in most fields are unaffordable for the general public and for scholars not associated with large libraries at universities and research centers. In particular, libraries in poorer regions and countries can seldom afford journals and books, creating a vicious positive feedback between poverty and ignorance. Another not unusual irony is that these subscriptions are paid with tax money, yet the average person on the street cannot access the scientific literature she or he helped to pay for! It is easy to see that this situation contributes to making learned discourse elitist and hinders the development of society. In another ironic joke of history, this was the prevalent situation at the start of the great scientific revolution after the Renaissance, a situation that the practice of publishing and dissemination helped to eradicate.
The old system of scientific publishing has served science well for a long time, but has now become outdated and possibly irrelevant in the modern era. Fortunately, a lot has happened in science publishing since the times of Galileo and Halley. Besides digital media and instantaneous worldwide information flows, we also have the concept of open science, a movement encouraging a more open scientific culture, in which there is a transparent development, dissemination, discussion and use of scientific results by society.
In line with this philosophy, why not try to think what is needed for a publishing system in which:
- anyone can submit a piece of scientific work,
- anyone can access it,
- anyone can peer-review, criticize, and comment on it, and
- anyone can use, modify and build upon it?
Such a system would ensure unrestricted access to scientific results for everyone, making the process of scientific discussion more transparent and enhancing development, because everyone would be allowed to see it. At the same time, quality assessment would become more significant and reliable because peer-review would be done on a massive scale, instead of by the typical two or three reviewers of the traditional system.
None of these ideas is really new, and the first and fourth are already part of the traditional system. Open Access journals implement the second, and in a few cases also the third (as The Cryosphere does). ArXiv and Google Scholar, among others, provide examples of how parts of this can be implemented technically.
In fact, the requirements that such a system should pose are fairly minimal for today's technology:
- at least one repository for the material, akin to ArXiv;
- user registration and login, with a unique identification, for submitting, reviewing, voting, commenting and downloading (enforces a minimum of etiquette, common to a vast repertoire of discussion forums);
- some minimal layout, like the classical title, author(s) name(s), abstract, results, discussion and references, to make search easier;
- unambiguous attribution of credits and rights to authors, providing some sort of legal protection like Creative Commons;
- votes restricted to one (up or down) per user, perhaps with the possibility of regret and reverting the vote (StackExchange works this way);
- and a good search engine!
Obviously, these ideas pose a lot of questions, mostly practical, but also a bit philosophical. For example:
Q: who will pay for such system to work?
A: The examples mentioned above give some good answers. ArXiv is supported by a big donation, and Google Scholar is maintained by Google. In addition, it is not too hard to imagine that such a system would be incomparably cheaper than the enormous sum of money spent on journal subscriptions.
Q: who is going to "weed out" bad papers if there are no reviewers and editors?
A: Crowdsourcing! People reading the posted articles can comment and vote, up or down, giving grounds for evaluating the quality of the scientific contribution made. It is not difficult to imagine how this elementary procedure can provide a basis for much better measures of quality and productivity for individual researchers than the so-much enshrined metrics in vogue today. I am also tempted to answer with another question: do we really need someone else to decide what we should read? After all, we are adults and, hopefully, as educated people, it is up to us to decide what is relevant and good for us.
Q: there will be too many papers around!
A: Yes, but in this case this is a good thing. Let's not forget that there are already many papers around right now and it is often difficult to find everything that is relevant. We only need good search engines! An open peer-review system, with voting and commenting, provides convenient measures to select what you want to read. One could, for example, choose the most up-voted or, better still, the most commented papers in a given field. Compare this idea with the current practice of selecting what to read among the plethora of papers around.
Q: who would like to submit a paper to such an open repository, that has no "status" or "recognition" within my field?
A: Me! And surely thousands of other scholars who already use ArXiv. Metrics such as "impact factor", which many people consider relevant, are nothing but the statistics of a social phenomenon within academia, namely that of attributing importance to a handful of journals, leading many others to feel compelled to publish there. I do not see any problem in people being compelled to publish in a place with better philosophical grounds. It is just a matter of critical mass. PLOS showed this is possible.
I am confident that a system working along the lines described above would contribute to solving many of the ills of the current system, while making substantial progress towards more open, democratic and free access to scientific results. In addition, problems like merit and precedence attribution would be handled more transparently. As Michael Nielsen wrote in his book, perhaps the hardest difficulties in starting to move in this direction are not technological, but cultural. In particular, fear, distrust, conservatism and other old cultural barriers amongst academicians are particularly hard to remove. The example of the ArXiv experience shows, nevertheless, that this is possible.
NOTE: This post contains some of my meditations on this subject, but I make no claim to originality. I have been reading and thinking a lot about this for years, so I am very probably making a cocktail of ideas, most of which are almost certainly not mine. Please feel free to comment if I am missing an important citation!
Working with MODIS L1B from scratch. 1. getting the data
The Moderate Resolution Imaging Spectroradiometer (MODIS), onboard both the Aqua and Terra platforms, is among the best instruments at the geoscientist's disposal. It not only has possibly the best calibration available on civilian platforms and a revisit time of 1-2 days, but the data are also accessible free of charge. In addition, since the Terra and Aqua platforms are designed to fly over any region at mid-morning and in the early afternoon, respectively, it is possible to get images of the same portion of Earth at two times on the same day. MODIS has 36 bands of varying widths spanning from the visible to the thermal infrared parts of the spectrum, with ground sample sizes between 250 and 1000 m. The instrument is especially suited for regional to global applications, or for very large systems.
There are several excellent MODIS products, generated using state-of-the-art algorithms written by expert scientists, which are suited for many different purposes. For many users, these products will be just fine. However, if what we need is to pick up individual images at daily frequency, and not compilations over several days (as these products often are), we need to work with Level 1 images. This is the first of a handful of posts in which I describe how to work with MODIS Level 1B images from scratch. I hope it is as useful for others as it is for me!
Working with MODIS L1B from scratch. 2. subsetting and reprojecting
In the second part of this series, we will see how to extract subsets of the hdf files we downloaded and reproject them. We will use the programs "dumpmeta" and "swath2grid", which are part of the MODIS Reprojection Tool Swath suite. These can be downloaded from the LP DAAC site. The package is free of cost, but you will have to register to gain access to it. Instructions for installing it come with the package. [In Linux this is just a matter of copying the uncompressed tree into a suitable directory, like /usr/local/MRTSwath, and then adding the 'bin' subfolder to the $PATH]
All procedures shown here can also be done with the "gdalinfo" and "gdalwarp" tools that are part of the GDAL suite; a rough sketch is given below. (I will add an update on this later!)
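As an untested sketch of that GDAL alternative (the subdataset name should first be checked with gdalinfo, and -geoloc relies on the geolocation arrays GDAL reads from the swath):
gdalinfo MOD021KM.A2013187.1035.005.2013187194950.hdf
gdalwarp -geoloc -t_srs "+proj=utm +zone=33 +datum=WGS84 +units=m" -te 170000 6120000 950000 7230000 -tr 1000 1000 'HDF4_EOS:EOS_SWATH:"MOD021KM.A2013187.1035.005.2013187194950.hdf":MODIS_SWATH_Type_L1B:EV_250_Aggr1km_RefSB' sweden_EV_250_Aggr1km_RefSB.tif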
- We begin with the two hdf files downloaded in the previous step:
MOD021KM.A2013187.1035.005.2013187194950.hdf
MOD03.A2013187.1035.005.2013187174113.hdf
The first one is the image data, while the second includes the ancillary information on georeference, sun position, etc.
- We then start by exploring the contents of our file using the program "dumpmeta":
dumpmeta MOD021KM.A2013187.1035.005.2013187194950.hdf metadata.dat
We open the output file in a text editor and look for the naming of the data granules. Under the heading "GROUP=DataField" we will find the names. In this example we look for bands acquired in the visible and mid-infrared regions of the spectrum: bands 1-7. These bands are originally acquired with 250 m (bands 1 and 2, red and near-infrared) and 500 m pixel size (bands 3-7, from the blue to the mid-infrared). In the 1KM product we downloaded, these bands have been resampled. The names look like this: "EV_250_Aggr1km_RefSB" and "EV_500_Aggr1km_RefSB", that is, 250 m and 500 m bands that have been aggregated into 1 km bands.
In the metadata file these are described as:
OBJECT=DataField_5
DataFieldName="EV_250_Aggr1km_RefSB"
DataType=DFNT_UINT16
DimList=("Band_250M","10*nscans","Max_EV_frames")
END_OBJECT=DataField_5
OBJECT=DataField_8
DataFieldName="EV_500_Aggr1km_RefSB"
DataType=DFNT_UINT16
DimList=("Band_500M","10*nscans","Max_EV_frames")
END_OBJECT=DataField_8
- We then select the coordinates of the upper left and lower right corner we want to cut from the image. In our case, we will use:
65.00N 8.00E
55.00N 22.00E
but we want them reprojected in a suitable UTM projection, in our case UTM zone 33, so the coordinates we use are approximately:
65.00N 8.00E 170432.16 7226731.39 0.00
55.00N 22.00E 947397.04 6117225.18 0.00
(This was done using the program "cs2cs" that comes with the package proj4:
cs2cs +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +to +proj=utm +zone=33 +ellps=WGS84 +datum=WGS84 +units=m +no_defs -r -E
65.00N 8.00E
55.00N 22.00E
)
- Now we have all elements we need to extract a subset. We put all relevant information in a parameter file "parameter.prm". The contents of this file are as follows:
INPUT_FILENAME = MOD021KM.A2013187.1035.005.2013187194950.hdf
OUTPUT_FILENAME = sweden
OUTPUT_FILE_FORMAT = GEOTIFF_FMT
GEOLOCATION_FILENAME = MOD03.A2013187.1035.005.2013187174113.hdf
INPUT_SDS_NAME = EV_250_Aggr1km_RefSB; EV_500_Aggr1km_RefSB
KERNEL_TYPE = NN
OUTPUT_PROJECTION_NUMBER = UTM
OUTPUT_PROJECTION_PARAMETER = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
OUTPUT_SPACE_UPPER_LEFT_CORNER = 170000 7230000
OUTPUT_SPACE_LOWER_RIGHT_CORNER = 950000 6120000
OUTPUT_SPATIAL_SUBSET_TYPE = PROJ_COORDS
OUTPUT_PROJECTION_SPHERE = 8
OUTPUT_PROJECTION_ZONE = 33
OUTPUT_PIXEL_SIZE = 1000
In order, these lines specify the input image filename, a generic output name, the output file format, the geolocation file name, the data granules to be exported, the resampling method (here nearest neighbours), the cartographic projection, a list of parameters related to the projection, the coordinates of the UL corner, the same for the LR corner, a label specifying that we have projected coordinates, a projection sphere (8 = GRS80/WGS84), a UTM zone and the output pixel size in meters.
- We then execute the command:
swath2grid -pf=parameter.prm
which dumps the following output in the screen:
******************************************************************************
MODIS Reprojection Tool Swath (v2.2 September 2010)
Start Time: Fri Nov 15 17:28:28 2013
------------------------------------------------------------------
debug 1
debug 2
General processing info
-----------------------
input_filename: MOD021KM.A2013187.1035.005.2013187194950.hdf
geoloc_filename: MOD03.A2013187.1035.005.2013187174113.hdf
output_filename: sweden
output_filetype: GEOTIFF
output_projection_type: Universal Transverse Mercator (UTM)
output_zone_code: 33
output_ellipsoid: GRS 1980/WGS 84
output_datum: WGS84
resampling_type: NN
output projection parameters: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
SDS name #bands in SDS #bands to process
1) EV_250_Aggr1km_RefSB 2 2
2) EV_500_Aggr1km_RefSB 5 5
Processing EV_250_Aggr1km_RefSB, 0 ...
output lines/samples: 1110 780
output pixel size: 1000.0000
output data type: same as input
output upper left corner: lat 65.02868042 long 7.98322664
output lower right corner: lat 55.03222632 long 22.03079051
input lines/samples: 2030 1354
input resolution: 1 km
%% complete: 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
... and so on until it has finished with all 7 bands.
- We finally have 7 new image files in our folder:
sweden_EV_250_Aggr1km_RefSB_b0.tif
sweden_EV_250_Aggr1km_RefSB_b1.tif
sweden_EV_500_Aggr1km_RefSB_b0.tif
sweden_EV_500_Aggr1km_RefSB_b1.tif
sweden_EV_500_Aggr1km_RefSB_b2.tif
sweden_EV_500_Aggr1km_RefSB_b3.tif
sweden_EV_500_Aggr1km_RefSB_b4.tif
These files contain the first 7 bands of the reprojected subset of the image data we downloaded. The image type is 16 bit unsigned integer. The naming b0, b1, etc is related to the data granules. In the 250_Aggr1km case, b0 and b1 mean bands 1 (red) and 2 (near infrared). The band specifications can be found here.
We can combine the images sweden_EV_250_Aggr1km_RefSB_b0.tif, sweden_EV_500_Aggr1km_RefSB_b1.tif and sweden_EV_500_Aggr1km_RefSB_b0.tif in RGB channels to obtain a true color composite. We do this using the program "gdal_merge.py" that comes with GDAL:
gdal_merge.py -separate -o sweden_2013187_143.tif sweden_EV_250_Aggr1km_RefSB_b0.tif sweden_EV_500_Aggr1km_RefSB_b1.tif sweden_EV_500_Aggr1km_RefSB_b0.tif
The results look like this:
The image in the example above has actually been reduced and enhanced to fit better into this page. This was done simply using ImageMagick:
convert sweden_EV_250_Aggr1km_RefSB_b0.tif sweden_EV_500_Aggr1km_RefSB_b1.tif sweden_EV_500_Aggr1km_RefSB_b0.tif -channel RGB -combine -geometry 25% -alpha off -equalize sweden_2013187_143.png
Of course, all these procedures are very tedious to do by hand and should be automated for large-scale projects in which hundreds of images are involved. In such cases, all of the above must be embedded in a script, using for example Perl or Python (or plain shell, as in the sketch below).
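For example, a minimal shell sketch (the file-pairing logic and names here are hypothetical and would need adapting to your own directory layout) could look like this:
#!/bin/bash
# loop over all MOD021KM granules in the current directory and reproject each one with swath2grid
for img in MOD021KM.*.hdf; do
    stamp=$(echo "$img" | cut -d. -f2-3) # acquisition date and time, e.g. A2013187.1035
    geo=$(ls MOD03."$stamp".*.hdf 2>/dev/null | head -n 1) # matching geolocation file
    [ -z "$geo" ] && { echo "no MOD03 file for $img, skipping"; continue; }
    # derive a parameter file from the template above and run the reprojection
    sed -e "s|^INPUT_FILENAME =.*|INPUT_FILENAME = $img|" \
        -e "s|^GEOLOCATION_FILENAME =.*|GEOLOCATION_FILENAME = $geo|" \
        -e "s|^OUTPUT_FILENAME =.*|OUTPUT_FILENAME = ${img%.hdf}|" parameter.prm > "$img.prm"
    swath2grid -pf="$img.prm"
done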
In the next post, we will see how to convert these images to radiance at the top of the atmosphere.
Working with MODIS L1B from scratch. 3. calibration, conversion to radiance at top-of-atmosphere
In the third part of this series we will calibrate the images to convert the pixel values to spectral radiance.
The spectral radiance is a physical quantity expressing the amount of energy being radiated (in our case, back to space), and is measured in Watts (Joules per second, energy flux) per square meter (area) per micrometer (wavelength) per steradian (solid angle).
The values that we will obtain here will not yet be corrected for illumination, topographic or atmospheric effects, and therefore correspond to what is measured by the satellite at the altitude of its orbit, as opposed to values on the ground that you can measure with a field spectrometer. The values we will obtain here are thus called "radiance at top-of-atmosphere", usually abbreviated "TOA".
The conversion is based on a linear relationship between data counts (pixel values) and the radiance measured by the sensor (for the gory details have a look at the manuals in NASA's MODIS Characterization Support Team). The basic relationship is:
radiance = (pixel_value - offset) x scale
and has to be performed band-wise.
We begin by extracting the scales and offsets for bands 3-7, that is, the 500 m bands (remember that we use the bands resampled to 1 km). We use the program read_sds_attributes that comes with the MODAT suite.
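Once the scale and offset for a band have been read, the conversion itself can also be applied with gdal_calc.py from GDAL (a sketch; the coefficient values below are placeholders, not real calibration constants, and the output name is just an example):
SCALE=0.02 # substitute the radiance_scales attribute for the band in question
OFFSET=316.97 # substitute the radiance_offsets attribute for the band in question
gdal_calc.py -A sweden_EV_500_Aggr1km_RefSB_b0.tif --outfile=sweden_band3_toa_radiance.tif --calc="($SCALE)*(A - ($OFFSET))" --type=Float32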
Working with MODIS L1B from scratch. 4. topographic and illumination correction
In the last post of this series we will review the necessary steps for performing topographic and illumination correction of the MODIS images. Topographic and illumination corrections are important when we need to obtain the apparent surface reflectance as an end product. This is necessary when we want to identify materials on the Earth's surface by deriving empirical spectral signatures, or to compare images taken at different times with different Sun and satellite positions and angles. By applying the corrections described here, we transform the satellite-derived reflectance into values that are (almost) directly comparable with those we would obtain with a field spectrometer. I wrote "almost" because we are not yet considering atmospheric correction, and also because of two further assumptions we will make below. Also note that topographic correction only compensates for differences in measured reflectance due to terrain, as if the surfaces were lying flat and measured with a nadir-looking instrument, but it cannot recreate values for areas behind obstructions.
We will need the calibrated images (radiance top-of-atmosphere, TOA, see previous posts); information about the sun elevation (Sun Zenith) and direction (Sun Azimuth), both from the MOD03 metadata file; information on the solar irradiance on each band, from each MOD02 hdf file; and two auxiliary grids, covering exactly the same region as the images, and containing the slope and aspect maps of the region of interest. The last two grids are derivable from a digital elevation model using standard GIS procedures (see, for example, GRASS).
The basic relation we will consider here is:
For the sake of this example, we assume that the materials on the surface are Lambertian, that is, they are perfectly diffuse isotropic reflectors, spreading the energy homogeneously in all directions. Admittedly, this is not true for many important natural surfaces, but it works fairly well as a first approximation, and allows us to compensate for illumination effects by simply dividing the observed radiance by the cosine of the sensor zenith angle. Consider that more realistic assumptions about the surface materials invariably involve very complicated additional calculations which, more often than not, rapidly push the problem to the edge of tractability. Here we will take the pragmatic approach of proposing a "good-enough" solution.
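A sketch of how such a relation is commonly written under the Lambertian assumption (based on the standard cosine correction; the exact expression used here may differ), in LaTeX notation:
\rho_\lambda \simeq \frac{\pi \, L_\lambda^{TOA} \, d^2}{E_\lambda \, \cos\theta_i}, \qquad \cos\theta_i = \cos\theta_s \cos S + \sin\theta_s \sin S \, \cos(\phi_s - A)
where L^TOA is the calibrated radiance, E the solar irradiance in the band, d the Earth-Sun distance in astronomical units, theta_s and phi_s the solar zenith and azimuth angles, and S and A the terrain slope and aspect.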
In this post we will use the following MODIS band 2 image of the Southern Patagonia Icefield as an example:
The image covers approximately 395 km in the North-South direction, and 119 km in the East-West direction.
We will need to extract the information on solar zenith and azimuth, as well as sensor zenith from the MOD03 files. Note that, as MODIS images cover a large geographical area, these angles cannot be considered constant over a scene, as we will sometimes assume for Landsat, for example. Therefore it is important to extract these values pixel-wise, as grids from the metadata file. We begin by reading the image and figuring out what is offered there:
read_sds_attributes -sds='SolarZenith,SolarAzimuth,SensorZenith' MOD03....
and we get: