TNT Products V6.0
December 1998


Introduction

MicroImages is pleased to distribute V6.00 of the TNT products, which is the 45th release of TNTmips.  The Windows versions now incorporate many display features as DLLs, which has reduced their size by 25%.  The following major features have been added:

  • 3D Simulations:  Many enhanced and new features have been added, including smoother turns, foregrounds, and backgrounds; orbiting and panning paths; and terrain following paths.

  • Geotoolbox:  The sketch, measure, region generation, selection, and related tools have been improved and reorganized into a new compact Geotoolbox.

  • Hatch Patterns:  Hatch patterns can be designed, stored, and used for polygon fills.

  • Import/Export:  MapInfo native formats can be imported and layouts exported to Adobe PDF format.

  • Label Placement:  Attractive label placement is supported with a new gadget that uses queries created by a wizard.

  • SML:  Rapid expansion continues with the addition of 184 new functions; introduction of 93 network analysis functions; HTML generated views; image classification functions; advanced GPS support; layout control; and easily created database record dialogs.

Four new Getting Started tutorial booklets are shipping in printed format.  Three new TNT product installation booklets are also shipping in the same format.  Many previously released Getting Started tutorial booklets have been updated.  All 45 of these booklets are included on the V6.00 CD in PDF format.  Direct access to these profusely color-illustrated booklets, totaling over 1000 pages, is now provided directly from the TNT menu bar.

In all, 221 new feature requests submitted by clients and MicroImages staff were implemented in various V6.00 processes.

Advanced Users’ Workshop

A reminder that MicroImages will host the 10th Advanced Users’ Workshop in Lincoln, Nebraska over 4 days (Tuesday through Friday), 19 to 22 January 1999.  Come and share ideas with clients and dealers from around the world.  If you plan on attending, please return the enclosed registration form.

Summary of New Features

System.

  • DLLs used in TNTmips for Windows 95/98/NT reduce installed size by ~21 MB

  • Project File limit increased to 16 terabytes for Windows 95/98/NT and DEC UNIX.

  • Objects increased to 4 terabytes for Windows 95/98/NT and DEC UNIX.

  • New icons in Object Selection dialog.

  • View detailed “info” about any object in Object Selection dialog.

  • Use “Add All” and “Remove All” for fast selection of objects.

  • Improved small TrueType fonts on all platforms by using hinting and smoothing.

  • Incorporates HTML interpreter to provide linked information screens.

  • Start Acrobat Reader and access all tutorial booklets at startup or from the menu.

TNTlite.

  • Increases raster object size to 614 by 512 cells.

  • Increases point elements in vector objects from 500 to 1500.

Visualization.

  • Fly smoother turns in the simulator and control maximum turn rate.

  • See cursor simultaneously in all 2D views.

  • Render a layout including 3D components directly to a raster object.

  • Fine tune a previously saved color balance.

  • Make several theme maps with the same ranges for comparison.

Import/Export.

  • Import MapInfo native, internal format (both graphics and attributes).

  • Import AVIRIS and ENVI hyperspectral images and SDTS DEMs.

  • Import XYZ coordinates in text files as 3D line segments.

  • Export Arc BIL/BIP; attributes with SDTS vectors; and georeference for GEOTIFF and ERMapper.

Geotoolbox.

  • Integrates, expands, and streamlines tools into a new Geotoolbox.

  • Use selection, measurement, sketching, and region generation from a single window.

  • Select a single graphical element and use in multiple functions.

  • Quickly move between tools and reports using tabbed panels.

  • Select GPS input as a surrogate for the cursor.

  • Select specific group in which to sketch.

  • Create region in a raster layer from a solid area or boundary trace.

  • Create any kind of TNT point, line, polygon, or text style in sketch tool.

  • Make and save a cross-section from a surface and a polygon layer.

  • Draw a line in a 2D view to position the cross-section.

  • Transfer polygon attributes to the cross-section vector object.

  • Available in the FREE TNTatlas.

GPS tools.

  • Integrated GPS functionality in Geotoolbox.

  • Set up input from multiple GPS devices.

  • Choose a different color cursor or TNT point style for each moving GPS position.

  • Choose another cursor or symbol for a temporally static GPS position.

  • Move any selection tool (for example, a circle) with a GPS position.

  • Log any and all GPS devices.

  • Play back a GPS log in real time.

  • Available in the FREE TNTatlas.

Hatch Pattern Editor.

  • Fill polygons with simple lines or complex TNT line styles.

  • Use a new style editor to create and edit hatch patterns.

  • Control line angle, spacing, offset, thickness, and so on.

  • Superimpose several line patterns to create complex fills.
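The geometry behind such a hatch fill can be sketched in Python (a hypothetical illustration, not MicroImages' renderer): given an angle, spacing, and offset, generate the family of parallel lines covering a shape's bounding box; a renderer would then clip each line to the polygon and superimpose several such families for complex fills.

```python
import math

def hatch_lines(width, height, angle_deg, spacing, offset=0.0):
    """Return (point, direction) pairs describing parallel hatch lines
    at angle_deg, spaced `spacing` apart, covering a width x height box.
    Each line is a point on the line plus a unit direction vector."""
    theta = math.radians(angle_deg)
    direction = (math.cos(theta), math.sin(theta))
    normal = (-math.sin(theta), math.cos(theta))
    # Project the box corners onto the normal to find the offset range.
    corners = [(0, 0), (width, 0), (0, height), (width, height)]
    projs = [x * normal[0] + y * normal[1] for x, y in corners]
    lo, hi = min(projs), max(projs)
    lines = []
    d = lo + (offset % spacing)       # shift the whole family by `offset`
    while d <= hi:
        lines.append(((d * normal[0], d * normal[1]), direction))
        d += spacing
    return lines
```

For example, `hatch_lines(10, 10, 0, 2)` yields six horizontal lines two units apart; calling it again with a different angle and superimposing the results gives a cross-hatch.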

Editing.

  • Use the Geotoolbox outlined above for efficient element selection.

  • Filter out islands below a selected size.

  • Show statistics information for all filters being tested.

  • Generate multiple line label positions with a crossing line.

Database Management.

  • Use table editor to define constraints for each field.

  • Constraints act as data filters when fields are filled in.

  • Set a field value from a multiple choice list.

  • Restrict the range of numbers allowed in a field.

  • Control upper, lower, and mixed case for string fields.

  • Restrict key fields to allow selection of fields from the primary table only.

  • Define how key fields should be represented in a single record view.

  • Show a scrolling list of the valid values in a field, from which one can be selected.
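The behavior of such constraints can be illustrated with a small sketch (hypothetical Python names, not the TNT table editor's actual mechanism): a validator is built from an optional choice list, numeric range, and case rule, and rejects any entry that violates them.

```python
def make_constraint(choices=None, min_value=None, max_value=None, case=None):
    """Return a validator enforcing a multiple-choice list, a numeric
    range, and/or string case, mimicking per-field table constraints."""
    def validate(value):
        if choices is not None and value not in choices:
            return False
        if min_value is not None and value < min_value:
            return False
        if max_value is not None and value > max_value:
            return False
        if case == "upper" and value != value.upper():
            return False
        if case == "lower" and value != value.lower():
            return False
        return True
    return validate

# Example: a soil-type field restricted to a choice list,
# and a pH field restricted to the range 0-14.
soil_ok = make_constraint(choices={"clay", "loam", "sand"})
ph_ok = make_constraint(min_value=0.0, max_value=14.0)
```

A data-entry form would run each field's validator as the field is filled in, which is how constraints act as data filters.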

Hyperspectral Analysis.

  • Compute principal components and view eigenvectors and component variance plots.

  • Use Self-organizing Map Classifier (unsupervised classification using neural network).

  • Import and export spectral curves from text files.

  • Select one or more wavelength ranges to define the bands used in all steps.

  • Use Hyperspectral Explorer to animate search for unique RGB displays of original or processed bands.

  • Use animated n-Dimensional Visualizer (rotating scatterplot) to define point clusters, extreme points, and relationship to position in 2D RGB display.

  • Use a variable averaging window (kernel) to extract multiple pixel spectra.
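The principal component computation above can be sketched with NumPy (a generic PCA over the band covariance matrix; the function name is invented, and this is not MicroImages' implementation):

```python
import numpy as np

def band_principal_components(cube):
    """cube: array of shape (bands, rows, cols).  Returns the eigenvalues
    (component variances, in descending order) and eigenvectors (one
    column per component) of the band covariance matrix."""
    bands = cube.reshape(cube.shape[0], -1)   # (bands, pixels)
    cov = np.cov(bands)                       # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # returned in ascending order
    order = np.argsort(eigvals)[::-1]         # re-sort descending
    return eigvals[order], eigvecs[:, order]
```

Plotting the returned eigenvalues gives the component variance plot, and the eigenvector columns show how strongly each original band loads on each component.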

Network Analysis.

  • Assign drawing styles to network elements.

  • View all lines with their TNT line styles.

  • Show DataTips for viewing names of all lines connected to a node.

  • View labels with stop numbers.

CAD Merge.

  • Combine several CAD objects into one.

  • Use same database joining options as in Vector Merge process.

COGO.

  • Identify points with alphanumeric labels.

  • Edit with improved tools.

  • Import coordinate points from text files.

Create Geospatial Products (SML).

  • Create display layouts with multiple groups and positioning.

  • Pop up a dialog for editing a database record.

  • Turn off any unwanted icon in a 2D or 3D view.

  • Directly read and display coordinates from multiple GPS devices.

  • Test a GPS position to determine if it has changed.

  • Use HTML to design and present attractive instruction scripts.

  • Read and write georeference objects.

  • Convert rasters between color models.

  • Classify multispectral images (20 functions).

  • Perform network analyses (83 functions).

  • Script with ~170 new functions (total now ~770).

Sample Data Logger APPLIDAT.

  • Use any TNTatlas dataset and a GPS unit to collect field observations.

  • Select position on display with GPS or cursor.

  • Select table type and fill in records for each position.

  • Omit the keyboard, as all entries can be multiple choice.

  • Edit existing positions and table entries.

  • Adapt this sample Script to specific field objectives.

New Tutorial Booklets.

  • Network Analysis

  • Analyzing Hyperspectral Images

  • Sharing Geodata with Other Popular Products

  • Macintosh:  Installation and Setup Guide

  • Windows 3.1x, 95/98 and NT:  Installation and Setup Guide

  • Optimizing Windows 3.1x

  • Technical Characteristics

Languages.

  • Use Japanese, Chinese, and Russian for TNT interface.

  • Create reference dictionary.

  • Merge translation with new English version to highlight changes.

  • Encrypt language resource files.

  • Distribute translations via microimages.com.

Dropping Platform

V6.00 is the last release of the TNT products for the Windows 3.x operating system.  Microsoft, MicroImages, and almost all PC users in the world have now migrated through at least one new operating system (W95) and perhaps two (W98).  You now have a considerable amount of money and personal time invested in the operation of your TNT product(s).  W95 and W98 provide the basis for significantly improving the performance of these products (reliable multitasking, reliable virtual memory, faster performance, ...).  Microsoft no longer supports W3.x products, and upgrading to W95 or W98 costs little in the overall scheme of things.  The time has come to move on!  “The king is dead; long live the king.”

Notice!  V6.00 is the last release for Windows 3.x.  Corrections for W3.1 will be made until the release of V6.10.

Effective with the release of V6.00, MicroImages is switching the compilation of all TNT products to C++ and making other changes, such as the introduction of Windows DLLs (Dynamic Link Libraries).  After these changes, it will be difficult, and soon impossible, to correct any errors for any TNT product compiled under C for W3.x.  MicroImages will set up a separate, temporary W3.x compile system to provide corrections and patches for your use of V6.00 until the release of V6.10.  Patches created in this interval will be preserved and will continue to be available.  However, after that, MicroImages may not be able to create any additional corrections for V6.00 under W3.1, as this older source code falls behind.  Please report any problems with V6.00 operating under W3.x as soon as possible, or make plans to convert to W95, W98, or NT.

Yes, it is a bit ironic that concurrent with V6.00, a nice booklet is provided on how to optimize W3.x for use with the TNT products.  However, this material has been provided for some time to W3.x users in a different, less formal format.

Hyper Performance

Many past MicroImages MEMOs have taken the position that the key parameter to weigh in the selection of a desktop computer for use with TNTmips was the megahertz of the processor.  Previously, this had been the best single parameter upon which to base a purchase decision for PCs, Macs, and lower-cost desktop UNIX-based machines.

The rapid introduction of very low-cost (less than $1000) PCs has led to some changes in this recommendation.  You should no longer focus only upon the megahertz of the processor.  Several factors have now altered the validity of using this single figure-of-merit for choosing a machine to run TNTmips and similar heavy-duty software products.  First, Intel and others are manufacturing processors which fall within the same general megahertz range but vary significantly in performance.  For example, a Pentium or clone, a Pentium Pro, and a Pentium II can all carry similar megahertz ratings (for example, 300 to 400 MHz) yet differ significantly in performance, as everyone rushes to make crippled but cheap chips for the under-$1000 market.  Second, this rush to create lower and lower priced machines results in the use of cheap, low-performance hard drives, controllers, graphics boards, and CD drives.

All these variations in components and processors require that you use considerable care at this time in selecting a new desktop machine.  Certainly, if you are using an old Pentium 60 or 90 with a 1X CD drive and can only afford a 300 MHz price-buster model, then do so, as it will still improve performance several times over.  However, TNTmips exercises every part of your computer and is especially sensitive to slow drive access.  A carefully selected high-end desktop machine of 400 to 450 MHz, purchased from a reputable manufacturer such as Dell, Gateway, or Compaq, can provide overall performance several times that of the 300 MHz price-buster.

Remember when TNTmips operations used to be primarily sensitive to the floating point arithmetic processing rate?  Subsequently, Intel significantly improved this factor in PCs with MMX and related developments for the graphics and game industries.  Now TNTmips is expected to read and write gigabytes of data at a single bound.  This requires a lot of high performance drive space and a state-of-the-art drive controller.  Optimizing the performance of any new machine’s bus (100 MHz), drive rotation (7,200 or 10,000 rpm), and controller I/O (10 megabytes per second) should all be goals in buying a new PC.  Use the read utility in TNTmips to read a large file of any type or the System Information utility in Norton Utilities to measure the overall throughput of each controller/drive combination in your current system and any potential new PC.

Caution!  It is easy to get fooled by caching when testing drive read rates or TNTmips startup times.  Processing time can look very good (or poor) depending upon how the previous use of your machine has cached all or a portion of the test files or of TNTmips.  The only sure test is to cold start your computer with TNTmips or to use the Norton Utilities which avoid the effects of caching.
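For readers who want a quick equivalent outside TNTmips or Norton Utilities, a sequential read test can be sketched in a few lines of Python (a generic illustration with an invented function name; like any user-level test it cannot defeat operating system caching by itself, so cold-start first or read a file larger than RAM, as advised above):

```python
import time

def read_throughput_mb_s(path, buffer_size=64 * 1024):
    """Sequentially read `path` in buffer_size chunks (64 KB by default,
    matching the tests described here) and return megabytes per second."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(buffer_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed
```

Dividing the bytes read by the elapsed wall-clock time gives the same megabytes-per-second figure of merit used in the results that follow.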

The following are some results of reading an 80 MB file (any similarly large file will do) with the TNTmips read test utility at Support/Timing/Read File (using a 64 KB buffer), with each test performed after a cold start.  These read rates compare closely with the results of the same tests performed on a Mac or Windows platform with the System Information utility in Norton Utilities, which has a similar function and automatically uses a 64 KB buffer.  Run the Norton or MicroImages read test on your own desktop machine with a 64 KB buffer and compare your results with these numbers.

200 MHz Pentium PC, W98, FAT32, IDE (DMA): 4.7 MB per second
266 MHz Pentium PC, W98, FAT16, Ultra IDE (DMA): 5.6 MB per second
200 MHz Pentium PC, W95, FAT32, IDE (PIO): 7.6 MB per second
200 MHz Pentium PC, W98, FAT16, IDE (without DMA): 2.8 MB per second
200 MHz Pentium PC, W98, FAT16, SCSI: 3.9 MB per second
400 MHz Pentium II, NT4.0, NTFS, IDE: 6.6 MB per second
132 MHz Mac, MacOS 7.x, HFS, SCSI: 5.0 MB per second
266 MHz Mac G3, MacOS 8.5, HFS+, Ultra IDE: 6.2 MB per second
300 MHz Mac G3, MacOS 8.5, HFS+, Ultra SCSI: 9.5 MB per second
233 MHz iMac G3, MacOS 8.5, HFS, Ultra SCSI: 10.0 MB per second

The Apple iMac is a low-cost ($1299) computer with only one standard configuration, which was tested here.  From these test results, you can see that it has the best drive and controller, providing a read rate of 10 megabytes per second.  This is higher than any other computer in MicroImages’ possession, including UNIX workstations.  As a result, although it has a somewhat slower processor (at 233 MHz), it has good overall performance when used with the read/write intensive processes of TNTlite or any of the TNT professional products.  Furthermore, Apple has announced that this model iMac will be lowered in price to $999 in January when it introduces a new model of the iMac.

Priority of Features for V6.10

As usual, it is not clear if the MicroImages software engineers will get all these tasks done for V6.10.  Thus the following list represents only our current priority efforts and plans.  The designation [available now] means the feature has already been added since the V6.00 CDs were created and can be tested in beta form by downloading the process(es) involved.

System Level.

An auto-start electronic tour of the TNT products is being produced by MicroImages in PowerPoint form with slides, MPEG movies, audio, and so on.  It is directed specifically toward those who obtain TNTlite in electronic format (CDs or downloading).  It will introduce the TNTlite user to all the auxiliary electronic materials provided to assist in learning and using it (for example, tutorials, reference manual, patches, software support, upgrades, ...).  It will also provide access to materials describing and promoting the TNT professional products.

On-Line Help.  The “Quick Help” structure in the TNT products will be replaced.  The new approach will use the internal HTML interpreter first introduced in V6.00.  This will enable MicroImages’ scientific writers, instead of software engineers, to easily add to and expand the on-line help.  You will also be able to add, via HTML, your own reference notes and instructions for each operation you have mastered.  It will also make translation of the on-line help, and maintenance of those translations, easier.

Visualization.  An alternative ArcView-like layer control panel will be added for use in simpler visualizations.  It will integrate that product’s useful automatic legend generation features.  It will be especially useful in creating products in SML.

Provide a link to the native MapInfo format (TAB: graphics and databases).

Allow text labels to occur in boxes which mask all features from all other layers (for example, mask out all lines that cross them).  Provide options to control how these label backgrounds will be displayed (for example, background color).

The display/visualization process used in all the TNT processes will be modified to isolate the graphics engine which does all the work (communication with RVC, projection changing, compositing, regions, ...) from the X/Motif based user interface calls.  This Geospatial Rendering Engine (GRE) will be the core of a TNTatlas Internet server which will accept input from JAVA applets operating in a web browser and render the requested view.  The GRE can serve as the basis for the development of geospatial products that use the standard Windows user interface.  MicroImages will also license the GRE to other software developers for use in their products.

3D Simulation.  Modifications will be made so that rendering is faster for objects that can be loaded into memory.  Appropriate methods for splining in X, Y, and Z will be added to assist in creating smoother paths.  Options will be provided to insert a plan view and/or a flight profile into the top/bottom of an MPEG movie.  The viewing position will move across these inserts.  Use of pyramid layers will be supported to provide for faster MPEG movie creation, even with the foreground smoothing and background speckle reduction introduced in V6.00.

Legends.  Improvements are planned for the layout and presentation of legends in both the display and hardcopy formats.

Regions.  A procedure will be added to the region creation tools to use an SML script to create a region.

TNTatlas Server.  An Internet based TNTatlas server is being created as a new MicroImages product.  The first version will be quite modest in its goals.  A JAVA applet to be used in browsers to communicate with this TNTatlas server is being written.  It will create an appropriate local (browser) interface, collect client inputs (for example, select local and layers, zoom, show DataTips, ...) and send them to the TNTatlas server.  The source code for this JAVA applet will be provided to all of you for possible modification for use with the TNTatlas server product.

SAR.  A process will be added to correct slant range SAR (Synthetic Aperture RADAR) images to plan view with georeferencing.  The Jet Propulsion Laboratory has many SAR images from aircraft and spacecraft available in this form [see format support below].

Styles.  The line style editor will be improved.  A new feature will support the insertion of symbols and characters into line styles as they are rendered.

Import/Export.  All of the import and export processes are being rewritten so that each individual import or export conversion process is a function [currently underway].  This will enable all these functions to be provided for your use in SML.  Perhaps more important is that the source code will be revealed for a number of these functions for file types with formats already in the public domain.  This will provide models to those who wish to use the TNTsdk to write their own specialized import/export plug-ins for use in SML or on-line in TNTmips.

Add export from CAD and vector objects to the native MapInfo format (graphics and attributes) often referred to as TAB [currently underway].  Export nodes having attributes as point data for use in other processes such as surface fitting.

Object Editing.  Add direct editing of the native MapInfo format (TAB: graphics and databases).  Provide more capabilities to interactively edit TIN objects.  Convert selected nodes to point elements.  Step through all selected elements to identify those without attributes and allow their attribution.  A “node-turn” table (for example, right turns only) will be added for use in network analysis.

Hyperspectral Analysis.  The hypercube object has already been created, but it was omitted from V6.00 as its insertion at the end of the development cycle was risky [underway now].  The Minimum Noise Fraction transform is being added to assist in mixed pixel extraction [underway now].  An interactive window will be added to assist in selecting layers by wavelength to display, show in n-Dimensional Visualizer, and so on.  It will also show spectra, atmospheric absorption bands, ... for your reference while selecting bands.

Databases.  Faster access to individual records via ODBC will be provided.  The use of constraints to control the form and characteristics of the entry of attributes will be expanded.

SML.  Expansion of the TNT geospatial programming language will continue.  You will be able to create and control more layers in the View window:  text, map-grids, scale bars, regions, SML scripts, and so on.  Development of new suites of functions will focus on those you request and:

  • import and export of objects

  • surface modeling

  • watershed analysis modeling

  • more features for database forms via database constraints

  • conversion between 8-, 16-, 24-bit, and composite rasters

  • conversion between color models:  RGB, HIS, HBS, CMY, CMYK, ...
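As an example of the color model conversions listed above, the standard naive RGB-to-CMYK formula (no ink profiles or device calibration; a generic sketch, not the planned SML function set) is:

```python
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB values to CMYK fractions (0..1) using the
    standard naive formula: K is the common ink component, and the
    remaining C, M, Y are rescaled by the non-black portion."""
    c1, m1, y1 = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c1, m1, y1)
    if k == 1.0:                      # pure black: all ink is K
        return 0.0, 0.0, 0.0, 1.0
    scale = 1 - k
    return (c1 - k) / scale, (m1 - k) / scale, (y1 - k) / scale, k
```

The inverse direction, and conversions to HSI or HBS, follow the same pattern of simple per-pixel arithmetic, which is why they lend themselves to scripted batch conversion.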

Tutorials.  All available effort on the Getting Started booklets will be focused on bringing the existing tutorials up to date with the features in this version.  At least one new booklet, entitled Introduction to Hyperspectral Imaging, will be released (covering the concept of hyperspectral images).

Editorial and Associated News  [by Dr. Lee D. Miller, President]

Hyperspectral Research.

NASA Project Funded.

I am pleased to announce that MicroImages is a partner in one of 10 new NASA projects recently competitively awarded to study the potential uses of hyperspectral imagery.  The following press release, published in the weekly newspaper Space News in September, announced the recipients of these contracts.

Industry To Study NASA Partnerships.

“NASA has selected 10 projects that could be the first step toward new partnerships between the agency and companies that use hyperspectral remote sensing data.”

“The purpose of the Earth Observations Commercial Applications Program—Hyperspectral projects, managed by the Commercial Remote Sensing office at Stennis Space Center, Miss., is to demonstrate whether there is enough overlap between NASA’s scientific uses of hyperspectral information and marketable applications of the same data to form partnerships, Bruce Davis, Commercial Remote Sensing program chief scientist, said in a Sept. 9 telephone interview.”

“The geological, agricultural, environmental and water quality projects awarded Sept. 4 will be conducted by the following companies:  Eastman Kodak Co., Rochester, NY; U.S. Department of Agriculture, Beltsville, MD; Yellowstone Ecosystem Studies, Bozeman, MT; Applied Analysis, Billerica, MA; California State University at Monterey Bay, Seaside, CA; Boeing Information, Space and Defense Systems, Seattle, WA; GDE Systems Inc., San Diego, CA; MTL Systems Inc., Beavercreek, OH; Opto Knowledge Systems Inc., Torrance, CA; and Spectral International, Arvada, CO.  Each two-year project cannot exceed NASA funding of $300,000 per year.”

“NASA needs to see if the data requirements for commercial applications are similar enough to NASA’s scientific requirements to partner with industry, Davis said.”

“The projects will use data from NASA’s Airborne Visible-Infrared Imaging Spectrometer.”

“‘The savings could be significant to NASA if there is an overlap in scientific and commercial requirements,’ Davis said.”

An executive summary of each of these 10 funded proposals can be found at http://www.crsp.ssc.nasa.gov/hyperspectral/partners.htm.  Four of the 10 projects awarded are concerned with investigation of the applications of hyperspectral imagery in precision agriculture.  MicroImages is a partner in the project being administered through California State University at Monterey Bay.  In addition to the SIVA Center at this University, the other partners in this project are Dole Agriculture, the largest grower of lettuce in the United States, and the Ecosystems Branch at the NASA Ames Research Center.  The executive summary for the project in which MicroImages is a participant is attached as Appendix A.

Role of TNTlite.

MicroImages’ participation in this program will be in the development of new hyperspectral analysis software features focused upon precision agricultural applications.  All software features added under this project will be made available to all for free via the normal releases of TNTlite.  This was announced at the NASA sponsored kickoff meeting for the 10 project participants in Denver in October and was accompanied by the distribution of TNTlite 5.9, the attached color plate entitled Free Hyperspectral Analysis, and other materials.  All the participants in this meeting are also being sent a V6.00 CD so that they will also have access to TNTlite 6.0.  Since all 10 of these NASA sponsored projects will be using AVIRIS imagery, they could experiment with the use of TNTlite in parallel to whatever other hyperspectral analysis software they currently use or will develop.

Limited Image Collections.

During the NASA briefings at the October meeting, it was quite surprising to me that no other pending or future sources of hyperspectral imagery during these 2-year projects or later were even discussed, other than continued operations of the AVIRIS.  Questions asked by participating contractors with regard to possible new, future sources of NASA sponsored satellite or aircraft hyperspectral imagery, other than AVIRIS, went unanswered.  It was clear from this briefing that NOAA has an increased interest in the collection of imagery by the AVIRIS one-of-a-kind sensor for coastal studies.  It was also made crystal clear that these 10 projects are only one of several different NASA and NOAA programs which must be serviced by this single JPL based AVIRIS aircraft program.  As a result, image collection missions for next year are heavily oversubscribed by all competing programs, resulting in limited image collection for all experimenters: NOAA, NASA, and others.  Additional details on the current AVIRIS low-altitude flight program for NASA/NOAA/JPL are provided below in the Hyperspectral Analysis section.

Premature Promotion.

Every single day I come across a new article in which some author claims that hyperspectral imagery has wonderful applications.  If only 2 application areas are mentioned in the article, precision agriculture is one of them.  Yet those of us awarded contracts to do preliminary application research in this area have just started; almost no hyperspectral imagery is available, and no one, including NASA, has announced any serious new initiatives to collect any.  Clearly, these authors are talking about, promoting, and selling something they know nothing about.  Serious application of hyperspectral imagery in precision farming has some clear requirements which are not even close to being met today by the higher resolution monochromatic or 3-band optical imaging systems about to be orbited or being built.

Agricultural crops grow fast and on their own schedule, and the “sensible” conditions which require management action can set in rapidly and demand an immediate response.  In other words, clear-sky hyperspectral imagery is required at frequent intervals.  Agricultural crops are spread over large land areas.  Current postage-stamp-sized hyperspectral images that contain several agricultural fields will even fit in TNTlite, but they are currently expensive and laboriously collected.  Acquiring and processing such hyperspectral images at frequent intervals over a single agricultural test site of limited area cannot be accomplished even in carefully controlled, funded experiments.  We are not even close to this kind of testing, as the equipment, commitment, analysis tools, and funding to collect the hyperspectral imagery and ground control data at 2 or 3 day intervals are not there, even for limited research sites.  Unfortunately, the popular press, and even some scientific journals which should know better, are taking a few isolated research results and blowing them up into what appears to be an immediate and magic utility in precision farming.

Suppose we examine only the area of hyperspectral image analysis, something that we know a little about at MicroImages.  One of the things we can recognize is that almost all the methods which have been developed for the analysis of hyperspectral images originate from testing its applications in geology: mineral exploration and astrogeology in particular.  TNTlite 6.0 now provides most of these methods.  This is important, as MicroImages has many clients in the mineral exploration industry.  They can immediately use these tools and are usually satisfied by a single hyperspectral mission over their project area.  Yet many of the popular assumptions regarding the utility of hyperspectral images in agriculture result from applying these same methods to agricultural crops.

Agricultural Imagery Available.

Fortunately, by careful, calculated design, the project in which MicroImages is participating judiciously chose agricultural sites in the Salinas Valley south of San Francisco.  The AVIRIS program, sensor, and aircraft are based at Dryden Air Force Base just south of this valley, so more frequent overflights, even during AVIRIS engineering tests, might occur.  This valley is also where some of the most valuable crops in the United States (per acre) are cultivated year round.  This planned proximity has already provided the opportunity for the collection of usable imagery from AVIRIS flights in October and scheduled for next April.  The other three precision agriculture projects funded for study by NASA with AVIRIS are located in such areas as Nebraska, Iowa, Indiana, and Illinois.  No crops were growing in these areas during the September/October low-altitude flight program, nor will any be next April (except winter wheat).  While far from providing for multitemporal analysis, these isolated hyperspectral images are of good quality and will provide the basis for experimenting with analysis methods specifically oriented toward precision agriculture.

Multisensor Fusion.

MicroImages is now a software development participant in two important NASA-sponsored experiments related to remote sensing applications in precision agriculture: high-resolution SAR and hyperspectral image analysis.  Both NASA-sponsored aircraft programs (AIRSAR and AVIRIS) are operated by JPL from Dryden, and in October both collected high-resolution imagery, at resolutions of 5 to 8 meters, of the same agricultural test site in the Salinas Valley, California.  Currently the AVIRIS program contract group at JPL is processing these hyperspectral images to remove some of the aircraft- and scanner-induced geometric distortion.  The AIRSAR program contract group at JPL has already processed and provided several SAR images and the digital elevation models extracted from them.  While this sample image set was collected for this project, as with all NASA-sponsored imagery, it is available for anyone to experiment with in TNTlite once JPL has processed it into usable images.  All of these images are posted almost immediately (within a week or two) for downloading from these programs' pages on the JPL web site.

MicroImages will work toward fusing these images of differing geometry into a composite set.  As this project progresses, these images will at some future date be provided in Project File format for your use.  The SIVA Center at California State University has also already assembled, in Project Files, an excellent and unique set of supporting geospatial materials for the Valley and test site.  These materials include crop maps, 1-meter-resolution DOQQs of the Valley, the DEMs for these DOQQs, a Landsat TM image, a SPOT color-infrared image collected within a couple of days of one of the SAR images, scanned topographic maps, and a large collection of color-infrared aircraft images.
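Fusing layers of differing geometry ultimately means resampling each image into a common map-coordinate grid.  As a minimal sketch of the idea (hypothetical code, not MicroImages’ implementation), each image carries an affine georeference from pixel to map coordinates, and resampling walks the destination grid through the shared map space:

```python
def make_affine(x0, y0, dx, dy):
    """Build forward (pixel->map) and inverse (map->pixel) transforms
    for a north-up image whose upper-left corner is at map (x0, y0),
    with cell sizes dx, dy in map units (dy applied southward)."""
    def pixel_to_map(col, line):
        return (x0 + col * dx, y0 - line * dy)

    def map_to_pixel(x, y):
        return ((x - x0) / dx, (y0 - y) / dy)

    return pixel_to_map, map_to_pixel


def resample_nearest(src, src_map_to_pixel, dst_shape, dst_pixel_to_map, fill=0):
    """Resample src (a list of rows) onto a destination grid by
    nearest-neighbor lookup through the shared map coordinate system."""
    lines, cols = dst_shape
    out = []
    for line in range(lines):
        row = []
        for col in range(cols):
            x, y = dst_pixel_to_map(col, line)   # destination pixel -> map
            sc, sl = src_map_to_pixel(x, y)      # map -> source pixel
            sc, sl = int(round(sc)), int(round(sl))
            if 0 <= sl < len(src) and 0 <= sc < len(src[0]):
                row.append(src[sl][sc])
            else:
                row.append(fill)                 # outside the source image
        out.append(row)
    return out
```

Real fusion adds rotation and higher-order terms to the transforms and better interpolation than nearest neighbor, but the pixel-to-map-to-pixel chain is the same.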

Precision Ranching Proposal Pending.

MicroImages also recently participated in the preparation of another joint proposal to NASA, for which no funding decision has yet been announced.  Over 100 proposals were submitted to this program for a potential award of 10 projects.

This proposed development project concerns experimentation in the practical application of satellite images in precision ranching operations.  The title of the proposal is Connecting NASA’s Earth-Science-Enterprise Space Assets to Resource Management Needs in Precision Range and Regional Agriculture.  This proposal was submitted to NASA via California State University at Fresno.  It has many participants, all of whom are using TNTmips.  Other project participants are located at California State University at Monterey Bay in California, the Peace Pipe Ranch in Texas, the University of Nevada at Reno in Nevada, and New South Wales Agriculture in Australia.

This project would address the test application of Landsat 7, EO-1, and MODIS imagery in precision ranching and regional agricultural inventory.  MicroImages’ participation would be in the development of additional software for precision processing of Landsat 7 TM imagery, image web site development, and APPLIDATs.  As always, all these advances would become available via MicroImages’ TNT product line.  Appendix B contains MicroImages’ official letter of commitment submitted to NASA as part of this proposal.  It reviews some of the 30 years of background remote sensing activities leading up to the submission of this kind of proposal.

Producing Finished Products.

The term geospatial analysis is becoming more commonly used, and it describes what we do at MicroImages: a higher-level, synergistic synthesis of component technologies that deal with areal data, such as image processing, GIS, CAD, surface modeling, and related software.  Low-cost software products are now available that specialize in focused or dedicated applications combining images, vectors, CAD, and/or databases.  These products are introducing the term geospatial analysis, and its association with the integrated use of these heretofore isolated data types, to a much wider base of computer users.

Perhaps you are one of those preparing geospatial data, products, and results for others in your company, organization, or for sale.  Perhaps you are the end user of geospatial products and are focused upon the use of the information they provide in making decisions or proceeding on to further analysis in other kinds of software.  In any case, as geospatial analysis matures, we all have an increasing need to integrate its results into other products.

The extensive import and export capabilities of TNTmips are one response to the need to become part of a larger whole.  However, they are often used to jockey geodata in and out of other software systems to get a geodata base assembled and, more and more, to solve a problem not possible in competing systems.  The TNT products provide more general purpose import and export functionality than our immediate competitors’.  In fact, we have recently concentrated on making the use of TNT products more efficient by spending considerable software engineering time and skill figuring out how to import, export, and link to our competitors’ native, internal formats, such as ESRI’s E00 and coverage formats and MapInfo’s TAB.  If our products are being used to create geodata (for example, with TNTedit) or to solve problems in these systems, then dealing with their native formats makes this easier for you.

As geospatial analysis spreads, there is also an increasing demand to produce finished results, such as high quality illustrated technical reports; fancy physical and electronic maps; geospatially oriented web sites; use of, and movement to and from, institutional databases; further analysis in spreadsheets and statistical packages; and so on.  In many applications, general geospatial analysis and its component software are only beginning to produce results that can be moved out of this special activity area for further analysis.  For example, areas measured from airphotos are more widely used now that orthophotos provide the basis for accurate measurement.

Geodata and the results of its analysis can be moved in and out of the TNT products to assist you in continuing on with its use in other popular products.  To help you recognize and better understand these interfaces, I charged Dr. Merri Skrdla this quarter with creating a special Getting Started tutorial booklet on the specific topic Sharing Geodata with other Popular Products, even though separate booklets already exist on acquiring, importing, and exporting geodata.  All these other booklets focus primarily upon moving around the spatial data and its attributes.

This new booklet should assist you in understanding some of the capabilities in the TNT products that can be used to exchange data with other popular software.  Some of the procedures introduced are outlined below.

  • How to do screen captures on the Mac or Windows platforms.

  • How to capture windows into TIFF from within the X server.

  • Using these TIFF files in your word processing, page composition, or graphics program.

  • Sharing text with other products.

  • Converting TNT layouts into Adobe Illustrator.

  • Converting TNT layouts into Adobe PDF format.  (This was a last-minute addition to the TNT products and is not yet covered in this booklet.)

  • Using 3D simulations in Microsoft’s PowerPoint.

  • Directly linking to and using dBASE tables.

  • Using Microsoft Excel with TNT database tables.

  • Using other databases via ODBC.

  • Directly editing ESRI’s Arc/Info Coverage and E00 files.

  • Directly editing ESRI’s ArcView shapefiles.
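As an illustration of what direct use of dBASE tables involves, the sketch below parses the fixed-layout header of a .dbf file using only the Python standard library.  It follows the published dBASE III header layout (a 32-byte file header followed by 32-byte field descriptors terminated by a 0x0D byte); it is a teaching sketch, not MicroImages’ linking code:

```python
import struct

def read_dbf_header(data):
    """Parse the fixed header of a dBASE III (.dbf) file from its raw bytes.

    Returns (record_count, record_size, field_names).
    """
    # Bytes 4-7 hold the record count, 8-9 the header size,
    # and 10-11 the record size, all little-endian.
    record_count, header_size, record_size = struct.unpack_from("<IHH", data, 4)
    field_names = []
    offset = 32                      # descriptors start after the 32-byte file header
    while data[offset] != 0x0D:      # a 0x0D byte terminates the descriptor array
        # Each 32-byte descriptor begins with an 11-byte, NUL-padded field name.
        name = data[offset:offset + 11].split(b"\x00", 1)[0].decode("ascii")
        field_names.append(name)
        offset += 32
    return record_count, record_size, field_names
```

With the header decoded, the fixed-width data records that follow can be sliced directly, which is what makes live linking (rather than import) practical for this format.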

Do competing products do most of these things?  I believe you will find, or have found, that they do not.  These kinds of procedures are being introduced into TNTmips to assist you in producing better final project results.  If they are not everything you want in this area, let us know and we will work at it.  However, please remember that TNTmips is a hybrid, cross-platform product, not a Microsoft Windows product.  Some things that are available for easy and automatic incorporation into true Windows products, such as Windows OLE, cannot currently be accomplished in the TNT products and will take some time and re-engineering.

Need for More Software?

Computer magazine writers constantly claim that many of the original, common computer products, such as word processors and spreadsheets, are “topped out” and mature, and ask, “oh woe,” where will we turn for the next killer application to drive the computer hardware and software industry?  Yet these products still contain software errors that lose my work.  Furthermore, these non-technical writers have never tried to edit an existing complex CAD drawing or to produce a complex map.  If they had, they would know why we need more computer cycles, drive space, screen area, printer resolution, and everything else.

Geospatial analysis software is in its infancy, and new demands are made upon it every day.  The entire area of 3- or n-dimensional geospatial analysis, with its attendant topological complexity, has barely begun.  Certainly 3D geospatial analysis, web tunnel navigation, and the games our children now play all have a symbiotic future and require all the computer resources one can collect.  In the simpler, current 2D world, the problems of conflation (merging overlapping, similar maps of varying date, quality, and similarity) are barely addressed.

Since geospatial analysis is barely defined and beginning, all its serious general purpose creators must continually develop, expand, and create new capabilities.

Software Quality?

State of the Industry?

Perfect software?  Not a chance!  Nearly perfect?  Not much of a chance here either.  Not in software in general, and certainly not in a broad-based product in a rapidly evolving field such as geospatial analysis.  The launches of Landsat 7, EO-1, and AM-1 (at $2 billion, the biggest satellite platform in the EOS program) have all been delayed another 6 months due to software errors.  If you are looking for perfect software, give up now.

Simple word processors and Microsoft’s complex operating systems have significant errors in them.  I encounter these errors daily if I do not remember to work around them.  Nobody excuses them to me.  In fact, I know in advance that I have no hope of getting an intermediate fix for them even if I thoroughly document and submit them.  I, like you, have simply had to accept them as reality in a world filled with man-made objects.  Unfortunately, we are continuing to lose ground in this battle.  A recent feature editorial in PC Week addresses this topic:  “Attacking the Quality Monster: Microsoft reacts to outcry over buggy releases, patches; ISVs voice platform concerns,” December 14, 1998, p. 18.  It reports that testing labs are finding that Microsoft and other products are increasing in errors, not just in total, but on a percentage basis.  The following paragraphs in the article explain the general reasons.

“PC software bugs—or at least the percentage of bugs—are multiplying also because the expectations of users are changing.  The average user is no longer likely to be an early adopter, someone who’s accustomed to figuring out the quirks of an immature product.”

“As PCs become appliances, the typical user has higher expectations for intuitive design and consistent behavior.  When software fails to meet these expectations, companies incur high costs as workers spend their time in training classes or on hold for vendor technical support.”

“At the same time, the common mode of using PC software is evolving from brief work sessions to all-day 24-by-7 operation.  Instead of starting an application, creating or editing a document or other work product, and closing the application, users are much more likely to be running a custom application that supports them throughout the workday.  Subtle defects, such as cumulative leakage of memory and other resources, are more likely to surface under these conditions.  Whether the platform be Windows CE or Windows NT, classes of software defects can surface that might have gone unnoticed in brief desktop sessions.”

“With the changing makeup of the user community, a more demanding environment and commercial incentives increasingly favorable to shipment of second-rate software, it’s no wonder that PC software quality is in crisis.”

“Will Microsoft and other vendors be able to stem the bug tide?  It won’t be easy, at least in the short term.  The challenge of software quality is different from other challenges, such as the sudden emergence of the Internet, that have faced PC software makers in the past.  Quality is not a feature that can be added to a current product:  It is a process, one that begins with product design and concludes long after the product is sold.”

Are Errors Deliberate?

This article also contains a table summarizing the approximate cost of correcting a software error at each stage of a product’s life, which can be summarized as follows:

  • $10 during requirements definition

  • $50 during design

  • $100 during programming

  • $250 during developer fixing

  • $500 during customer testing

  • $1600 when in service

Of course, such costs vary from product to product and company to company, primarily with company size.  But clearly, neither Bill Gates nor I want errors in the software products we ship.  The problem is that neither he nor I have figured out how to eliminate them.  However, we both get lots of advice on this matter.  I know that I take errors in the TNT products personally, and the MicroImages staff all know how I feel about them.  I also know that I am getting gray hairs from them.  Complex software cannot be made error free by sheer willpower or any amount of planning.  It is the creation of a team of human beings working together, and each inevitably adds a little bit of something unanticipated to the final whole.

The problem is that humans create software.  The human brain and its reasoning processes are certainly far from perfect.  Why should we expect that the software created by them, and seeking to replace them, will be?  In fact, most of the computer code that has been written is far more logical than human reasoning.

Perfection?  This is something we can only strive for and approach asymptotically.  In the case of software, it is the combined efforts of client and software vendor, the policies of the vendor, and the temperament of each client that set the location of this asymptotic goal, at perfection or well short of it.  All software will have errors; it is how we work together to fix them that determines most of their impact.  The biweekly upgrades on microimages.com help some of you a lot.  However, they have to be used with care, as the fix for a specific error can unmask or create other errors, and these biweekly upgrades cannot be extensively tested.

Need to Proceed Pragmatically.

MicroImages now has many clients around the world with different national cultures and personalities.  I see and read a lot of the written communications with you.  Most of you see the 99.99% of things that are right in your software products in general and in TNTmips in particular.  You somehow circumvent the problems you encounter, or work with MicroImages to get through them, and accomplish marvelous things we often did not dream of when creating the software.  You get errors, get fixes, and get on with the job.  But yes, there is also that 0.01% who let the problems they encounter stop them dead, for whatever reason.

Periodically, MicroImages receives advice from one of you to slow down the upgrade cycle and check the products more thoroughly.  It has been my observation, after presiding over the 45 releases of the TNT products over 12 years, that this would not necessarily help a great deal.  I am not saying that steps cannot be taken to improve quality, merely that this is not particularly one of them.  These kinds of complex software products are not necessarily improved by delays and further checking.

There are a million ways that one can “put the features together” in TNTmips.  As you find errors in some of these combinations (obscure or prominent), they are patched.  As the release period moves on, there are patches on patches, and the best solution is to release the next version to clean all this up.  But innovation is essential to compete and to meet your requirements, so during that same period the software is also being altered.  Only one current version of the TNT products is maintained, changed, and rebuilt nightly.  For many technical reasons unique to MicroImages and the TNT product design, it is impossible to retain and manage two versions.  The result is that the quality of this evolving software is not necessarily proportional to the time spent testing it, unless all alterations to this one version are halted.

How About Nearly Perfect?

All this is like the silly current practice of running around asking software vendors if their products are year-2000 compliant.  Certainly we can answer yes for the TNT products, as we know what is inside them.  But who knows whether even the simple dates stored in the attributes of geodata you purchase are expressed in a field as 98 or 1998?  Of far more significance, we are directly dependent on the Y2K features of the operating systems we support, and current Windows products are not yet Y2K certified.  Why do you think Bill Gates is now using the name W2000?  To give us confidence?  Or to get us all to buy that confidence during the next year while adding to his fortune?

Early cars had lots of parts which had to work together—and didn’t for long.  I can remember my parents’ first new automobile, and it had plenty of defects.  I remember my early cars and spending lots of time and money fixing their defects.  It took half a century to get it right, with a little discipline introduced by Japanese manufactured cars in the 1960s and 1970s.  They forced the U.S. auto manufacturers to work at getting it right.  Cars still do the same things today, but no one can deny that they do them better each year.  Features thought to be luxuries a couple of years back, such as positive traction, airbags, electronic control systems, and electronic door locks are now important to most of us and generally work well along with everything else in a new car—and for tens of thousands of miles.

Geospatial software development is now in the 1920s.  We are not even sure if we are building the equivalent of a truck, car, airplane, or train, as the market has not yet made this clear.  All we do know is that we are no longer using the equivalent of the GIS steam engine.  We also know that rapid innovation and good service are required to stay alive in the high technology business.  So continue to tolerate and work with us on those errors which inevitably creep in as we strive to meet your rapidly expanding demands for new features.

MI/X (MicroImages’ X Server)

The MI/X server used with MacOS 8.1 required minor modifications to run with the release of MacOS 8.5.  These changes have been made, and the new version of MI/X is installed as part of V6.00 and is posted for anyone to download at microimages.com.

MI/X has been checked and works with early beta versions of W2000 (alias NT5.0).

Microimages.com now supports almost 2000 direct downloads of MI/X a week, up from 1400 a year ago.  This is a total of about 100,000 downloads for the past year, up from 70,000 the previous year.  In addition, there were 39 new mirror sites registered this quarter, bringing the total of registered, active mirror sites to 130 worldwide.  All these mirror sites store and provide public access to MI/X.  It is reasonable to project that between 0.5 and 1 million people have given MI/X a test run.

MicroImages is planning to release the source code to MI/X soon after V6.00 ships, as time allows.  It will be released as open source software along the same lines as LINUX, Netscape, and other software.  MicroImages will remain the custodian of the source and the master site for the release of new features.

The following is a sampling of the comments provided by users of MI/X.

From Chris Weaver at ctweaver@... on 20 June 1998

“I gotta write a real quick note to say thanks.  I am very impressed with the quality of you x server.  And just to let you know, I go to state [NC State University] and your company has a good name there because of this product.  Thanks.”

From cagney@... on 22 June 1998

“Just a note to say I downloaded your x-server software for w95 today.  I think it’s great, I’ve been looking for a way to unify the hose network, and you just provided that way.  Thank you.  I’ve kept a link to your site to see what else you are using this to bring over to the (*pew*) 95 market.  The network model will rule the world!”

From Paul Gregg at pgregg@ti... on 22 July 1998

“Hi, I downloaded your free X-server today.  Having been an X user for many years and recently been forced to use WinNT platform I’ve been looking around for Windows X-servers to display my Unix programs.”

“I must congratulate you on an excellent product—even better than all of the commercial servers I have tried.”

From Gatot Pramono at p2217069@... on 23 July 1998

“I am able to run UNIX based Arc/Info ver. 7.1 from Windows NT using MicroImages’ X window emulation software.”

From Erwin Bolwit at erwin@... on 23 July 1998

“This afternoon I downloaded MI/X from your server.  I must say that I’m pretty amazed how smoothly it installs and runs for a free program made by one single company.  The reason I downloaded it was that I wanted to test how well the newer Linux desktop system (RedHat fvwm2, KDE, Gnome) would run on it.”

“Most applications I tried run reasonably well and fast over a 10Mb full-duplex ethernet.  But the desktop systems use some features that MI/X doesn’t understand.  The one message I see most is that the SHAPE extension isn’t understood.  I presume this is because MI/X supports X11R5 and this is an X11R6 command.”

“Given that the Win version was updated in March 1997, I presume you might have stopped developing the (free) version.  As far as I know there is no OpenSource X server available for Windows and Macintosh platforms.  All this is why I am asking you if you have considered or would consider the possibility of making the MI/X sources available under one of the established or your own open source license.  As a reference you might look at Netscape’s website www.mozilla.org, containing the source of their web-browser, or read the popular piece by Eric Raymond, the Cathedral and the Bazaar, also available from the mozilla site.  I think there are enough people interested in keeping the X server up-to-date, and if the license is open enough, this could probably be easily done by merging code from the UNIX free X server, XFree86 (www.xfree86.org).  That way you would have the most up-to-date version of X available to provide to the clients of your other software.”

From Tim Tesh at tetesh@... on 24 July 1998

“Thanks for providing a free Xserver for Windows.  Any chance that you will join the Open Source movement and publish the source for the Xserver.  Seems like it would allow additional free advertisement.  Seems like you would gain a lot of additional programmers to enhance your software.  But I guess it might be a headache too.”

Windows 2000

V6.00 of the TNT products has been tested successfully with the beta version of W2000 (alias NT5.0).

Macintosh

One Step to Finder.

V5.90 required several mouse operations to switch from a TNT product to the Mac desktop or another suspended program.  This was not typical of using the Finder to switch programs.  MI/X has now been modified so that a single mouse click toggles an active TNT product to the background, exposing the Finder or another suspended Mac program.  To activate any program, simply select it from the pull-down menu at the right end of the MacOS toolbar.  Your TNT product becomes active (takes over the screen) when “TNTx” is selected on this menu.

Speed and RAM Doubler.

Due to changes in MacOS 8.1 and MacOS 8.5, MicroImages now recommends against using Connectix’s Speed Doubler and RAM Doubler.  They are no longer needed and may create difficulties in some situations.  At this time, V6.00 has no known difficulties operating with any standard Mac extension.  However, as always, fewer extensions mean increased performance and fewer errors at the operating system level.

MacOS 8.5.

Minor modifications were made to MI/X to compensate for changes made in MacOS 8.5.  You must use the new V6.00 MI/X to operate with this latest version of the Mac operating system.

Each time Apple has released a new MacOS version in this latest series (8.0, 8.1, and 8.5), it has made claims about how each speeds up PowerMacs, especially those based on the G3 chip.  Unfortunately, MicroImages has found that this is just Apple marketing hype.  Each new MacOS release has performed approximately the same in terms of speed of operation of most commercial software, including the TNT products, on any particular Apple Mac.

Apple’s claims that the G3 300 MHz processor exceeds the performance of the Pentium II at 400 or 450 MHz are simply Apple marketing hype.  Several benchmark reviews have been published in popular magazines, including those devoted only to Mac products.  These have compared systems running real, high performance, benchmark applications such as PhotoShop, 3D visualizations, and so on.  All these benchmark reports have shown that the G3/300 based Mac machines are at parity with about a 333 MHz Pentium II.  MicroImages’ experience using the identical TNT product with identical code produces comparable results.

Each 8.x release has been more reliable than the previous one, resulting in fewer random freezes during the operation of Microsoft Word or the TNT products.  However, freezes still occur on official all-Apple Mac equipment even without any application running (operating system actions only).  Since the MacOS will not be multitasking until the release of MacOS 10, these freezes produce an ILL-OP and require a complete reboot.  It is futile to blame these kinds of MacOS problems on the TNT products, which may simply be the applications run for the longest duration on your Mac.

Increased Performance.

The buffer sizes used by the TNT products have been increased substantially, since most Macs now have adequate memory of 32 MB or more.  The effect of these changes is substantially faster TNT operations that sequentially read a significant amount of geodata from a hard drive or CD (for example, reading a single band image from an RVC file).  These same operations will be somewhat slower if you are still using only 16 MB of memory, as virtual memory is more likely to be needed.  Operations that read geodata non-sequentially (requiring frequent read-head movement) will not be noticeably faster; reading multiple bands from a large hyperspectral image is one example.

LINUX

License Characteristics.

The LINUX and SGI versions of TNTmips, TNTedit, and TNTview are now capable of multiple floating licenses using FLEXlm.
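For readers unfamiliar with FLEXlm, floating licenses are served from a small text license file read by the license manager daemon.  The fragment below shows only the general shape of such a file; every detail here (hostname, hostid, port, daemon name and path, feature name, and key) is invented for illustration, and your actual license file comes from MicroImages.

```
SERVER licserver 00a0c91e2f3d 27000
DAEMON microimg /usr/local/flexlm/microimg
FEATURE tntmips microimg 6.0 1-jan-0 5 ABCDEF123456
```

The SERVER line names the machine running the daemon, and the FEATURE line grants the stated number of concurrent seats to any client that can reach it on the network.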

The L50 license sold for use with LINUX has no resolution limits and therefore can be used at 1600 by 1280 pixels and with dual monitors.  For information on how to use dual monitors with LINUX please see:

www.ssc.com/lj/issue46/2619.htm/

                        or

www.metrolink.com/support/t543multi.htm/

Hot Performance Reported.

A MicroImages client in Germany has recently submitted the following information about the operation of TNTmips on a LINUX based PC.  MicroImages is not yet able to perform similar tests.

“Dear team of MI,”

“We are working with TNTmips running on Windows NT v. 4.0 for about one year now (former on Windows 95).  A few days ago we tried a new version of Linux (with KDE) and set up TNTlite v. 5.8 (TNTlite works very fine in this ‘environment’).”

“Next we tried some ‘speed tests’ using some lite data from CD-ROM.  In exact: we used raster data and the automatic classification with Fuzzy C Means (Interpret/Raster/Classify/Automatic) and other ‘number crunching’ operations, even import/export as far as possible.”

“We were shocked about the results: working on absolutely identical PCs (300 MHz Pentium II, 128 MB RAM) Linux was in average about 3 - 5 times faster!!!”

“After this we had some very hot discussion about Linux versus Windows NT and decided to make a change to Linux.”

“1) Is it possible to change to Linux?”  [Certainly.]

“2) Do we need a serial Dongle?”  [Yes, but NT can also be used with serial key.]

“3) How much would we have to pay for this?”  [No charge if you have a D50 (1280 by 1024 pixels), which converts without cost to an L50.]

“4) Would this licenses include a higher resolution (1600 X 1280)?”  [Yes, as the LINUX L50 license has no resolution limits and could even be used for multiple monitor systems.]

“We are using TNTmips on two licenses.”

Flexible Performance.

Science, the most prestigious U.S. general science publication, has an article entitled From Army of Hackers, an Upstart Operating System [11 December 1998, pages 1976 to 1980].  This article states:

“‘the end result,’ he says, ‘is that you get software that’s smaller, less buggy, and more stable’—which many computer scientists say is the case for Linux.  The Avalon supercomputer [using multiple PCs and Linux] has been operating for many months now without crashing, reliability that is almost unheard of in the supercomputer world.  Some computer applications for personal computers and workstations also run faster under Linux.  According to a memo, leaked to the public via Internet by an internal source and confirmed as authentic by Microsoft, Netscape’s Navigator Web browser rendered graphics and text ‘at least 30-40% faster’ when it ran in Linux than it did in Microsoft’s own operating system, Windows NT.  Finally, Linux’s small size and speed mean that it runs just fine on less expensive computers, including those with Intel’s older 80486 processor and its clones.”

ESRI’s Position.

The following statement was broadcast on the ESRI list server [ESRI-L@ESRI.COM] on 25 November 1998 by David Maguire, ESRI Director of Product Planning, in response to requests for ESRI to support LINUX.

“Thank you for your questions about Linux.  Currently, ESRI has technology which operates on Windows, UNIX and Macintosh operating systems.  We do not currently support Linux, as I am sure you are aware.  Our position on operating systems, like many things at ESRI, is that we are entirely market driven.  We will support any/all operating systems and technology for which there is a large market.  We are generally reluctant to take on support for new operating systems until we are sure that the technology is mature and that it has a long term future.  This is because it is very expensive for us to port to a new platform and to support it over many years.”  [continues on to explain in more detail]

TNTlite™ 6.0

Increased Raster Size.

The maximum size of a raster object of any data type that can be processed in TNTlite is now 614 by 512 cells (314,368 cells), up from 640 by 480 (307,200 cells).  This is the nominal frame size of the images collected by the AVIRIS hyperspectral program.  Remember that these cells can form any rectangle with up to a maximum of 1024 lines or columns.
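Expressed as a quick check (a hypothetical helper using the limits exactly as stated above):

```python
# Stated V6.00 TNTlite limits (as given in these release notes).
MAX_CELLS = 614 * 512        # 314,368 cells total
MAX_DIMENSION = 1024         # lines or columns

def fits_tntlite(lines, columns):
    """True if a raster of the given shape fits TNTlite's stated limits."""
    return (lines <= MAX_DIMENSION
            and columns <= MAX_DIMENSION
            and lines * columns <= MAX_CELLS)
```

So a 640 by 480 raster still fits, as does any other rectangle within the cell total, such as 1024 by 307.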

Increased Point Elements.

Previously, the number of records allowed in each table of a database was increased from 500 to 1500.  These database records may subsequently be converted to point elements in a vector object.  To accommodate this and the use of 1500 attribute records, the number of points allowed in each vector object has also been increased from 500 to 1500.

Distribution Patterns.

Many aspects of TNTlite distribution cannot be traced.  CDs are passed from hand to hand, multiple installations are made from a single CD, electronic transfers are made, and so on.  Shipments and downloads of the Windows versions of TNTlite now run 2.7 to 1 over all other platforms combined.  This ratio is decreasing as users of other operating systems discover TNTlite; they are harder to reach by traditional means, as they form a smaller, more dispersed audience served by specialized magazines and similar channels.  Windows shipments are 7 to 1 over the Mac, down from 8 to 1.

Interest in LINUX is increasing, so that there are now as many downloads for LINUX as for the MacOS.  LINUX still runs third in total downloads, as the Mac version was available a year earlier, when TNTlite was first introduced.  However, the trend is such that shipments of the LINUX version will exceed the Mac version from this point forward, as LINUX is an excellent way for an academic department to adopt TNTlite, and professional TNT sales for LINUX are also increasing.

New Distribution Program.

Now that TNTlite is a mature and complete product with excellent supporting materials, it is time to significantly increase its distribution by at least an order of magnitude.  Downloads of TNTlite this year doubled in number from last year.  However, it still requires a lot of perseverance to get TNTlite via the Internet.  As a result, MicroImages is initiating a new TNTlite orientation, pricing, and distribution policy with the shipment of V6.00 of the TNT products.

The significantly improved access to the electronic form of the Getting Started tutorial booklets is explained in detail in a section below.  It is now virtually impossible for the user of TNTlite to be unaware of these booklets and how to acquire and install them from the CD or via downloading.  As a result, it is no longer necessary to push the printed versions of these materials to enrich the experience with, and utility of, TNTlite.  In fact, the on-line Adobe Acrobat Reader providing access to these 45 booklets with their 3000 color illustrations is superior to the black and white fine-print booklets.  Now all that is needed to have a good learning experience with TNTlite is the V6.00 CD attached to a simple delivery card providing some general information.  As a result of this “complete CD” approach, wider use of TNTlite can be made using new, cheap CDs.

New Mass Distribution Prices.

Individual CDs for TNTlite 6.0 are now available at the following prices:

Individual CDs will be shipped anywhere in the world for $10 prepaid, which includes shipping costs by airmail only.

100 CDs can be ordered all at one time for $300 plus shipping by the method you specify.

100 CDs can be ordered before the reproduction run of V6.10 for $200 plus shipping by the method you specify (can be shipped cheaply with your upgrade).

The price of the TNTlite kit containing printed versions of all 45 booklets (1000 pages) is now increased from $40 to $50, including shipping by airmail only, anywhere in the world.  This increase reflects the increased costs of shipping the additional printed booklets which have been added in the past several quarters.

Under this new CD pricing program, there are thousands of advance orders for TNTlite 6.0 via CD.

TNTatlas® 6.0

Geotoolbox.

The new advanced Geotoolbox can now be used in TNTatlas.  More measurement tools are provided, and you can use the sketching tool to make a temporary graphical overlay, all through an easy-to-use interface.  Please see the section below on the Geotoolbox for all the details on this powerful tool, which is available in any new TNTatlas.  You can experiment with these new features in the small TNTatlas incorporated in the new Data Logger APPLIDAT.

GPS Support.

GPS devices can now be set up in a TNTatlas and their coordinates plotted and displayed.  Multiple GPS inputs can be used.  Varying symbolism can be assigned to each device.  The view can scroll automatically with a GPS input.  Please see the section below on new GPS features for all the details on these powerful, automatically available, new features in any TNTatlas.  You can experiment with the use of a GPS in the small TNTatlas incorporated in the new Data Logger APPLIDAT.

Installed Sizes.

Loading TNTatlas 6.0 processes onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.  

                                           V6.00    V5.90
PC using W31                               18 MB    16 MB
PC using W95                               22 MB    19 MB
PC using NT (Intel)                        22 MB    19 MB
PC using LINUX (Intel)                     20 MB    17 MB
DEC using NT (Alpha)                       21 MB    19 MB
Power Mac using MacOS 7.6 and 8.x (PPC)    38 MB    33 MB
Hewlett Packard workstation using HPUX     22 MB    20 MB
SGI workstation via IRIX                   25 MB    22 MB
Sun workstation via Solaris 1.x            21 MB    19 MB
Sun workstation via Solaris 2.x            22 MB    20 MB
IBM workstation via AIX 4.x (PPC)          24 MB    21 MB
DEC workstation via UNIX (OSF/1) (Alpha)   25 MB    22 MB

TNTview® 6.0

Changes.

No specific changes were made just for TNTview.  However, many other changes were made in processes provided as part of TNTview.  These changes are explained in detailed descriptions provided in the TNTmips New Features section and in the attached color plates.  The improvements include:

  • improved 3D simulations

  • expanded GPS support

  • extensive SML additions

  • improvements in the visualization process

  • direct access to all applicable Getting Started and related booklets

  • the new geotoolbox

  • hatch pattern fills

When TNTview is installed, icons representing two APPLIDATs will also appear.  An explanation of the new Data Logger APPLIDAT can be found in its section below.  You can create and use APPLIDATs and other TurnKey Products via TNTview.

Upgrades.

Within the NAFTA point-of-use area (Canada, U.S., and Mexico) and with shipping by UPS ground.  (+50/each means $50 for each additional quarterly increment.)

TNTview Product        Price to upgrade from TNTview:
                       V5.90   V5.80   V5.70   V5.60   V5.50   V5.40 and earlier
W31, W95, and NT        $95     170     225     275     325     +50/each
Mac and PMac            $95     170     225     275     325     +50/each
LINUX                   $95     170     225     275     325     +50/each
DEC/Alpha via NT       $125     225     300     350     400     +50/each
UNIX single user       $155     280     375     425     475     +50/each

For a point-of-use in all other nations with shipping by air express.  (+50/each means $50 for each additional quarterly increment.)

TNTview Product        Price to upgrade from TNTview:
                       V5.90   V5.80   V5.70   V5.60   V5.50   V5.40 and earlier
W31, W95, and NT       $115     205     270     320     370     +50/each
Mac and PMac           $115     205     270     320     370     +50/each
LINUX                  $115     205     270     320     370     +50/each
DEC/Alpha via NT       $150     270     360     410     460     +50/each
UNIX single user       $185     335     450     500     550     +50/each

Installed Sizes.

Loading TNTview 6.0 processes onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.   

                                           V6.00    V5.90
PC using W31                               25 MB    23 MB
PC using W95                               28 MB    27 MB
PC using NT (Intel)                        28 MB    27 MB
PC using LINUX (Intel)                     26 MB    22 MB
DEC using NT (Alpha)                       27 MB    28 MB
Power Mac using MacOS 7.6 and 8.x (PPC)    44 MB    39 MB
Hewlett Packard workstation using HPUX     31 MB    27 MB
SGI workstation via IRIX                   35 MB    31 MB
Sun workstation via Solaris 1.x            29 MB    25 MB
Sun workstation via Solaris 2.x            28 MB    26 MB
IBM workstation via AIX 4.x (PPC)          34 MB    30 MB
DEC workstation via UNIX (OSF/1) (Alpha)   36 MB    32 MB

TNTedit™ 6.0

All the features added to TNTmips in the processes supplied as part of TNTedit have been correspondingly updated.  All the new features in the following major sections apply.  Please review them below:

• System level changes
• New 3D Simulations
• Many new Import/Exports
• Use the new geotoolbox
• Use hatch pattern fills
• Expanded GPS Input
• New interactive label placement tool
• Direct access to Getting Started booklets

The most significant additions to TNTedit are the ability to import MapInfo native format files for editing and the new label point placement tool.

Upgrading.

If you did not order V6.00 of your TNTedit and wish to do so now, please contact MicroImages by FAX, phone, or email to arrange to purchase this upgrade or annual maintenance.  Entering an authorization code when running the installation process allows you to complete the installation and immediately start to use TNTedit 6.00 and the other TNT professional products it provides to you.

If you do not have annual maintenance for TNTedit, you can upgrade to V6.00 via the elective upgrade plan at the cost in the tables below.  Please remember that new features have been added to TNTmips each quarter.  Thus, the older your current version of TNTedit relative to V6.00, the higher your upgrade cost will be.  As usual, there is no additional charge for the upgrade of your special peripheral support features, TNTlink, or TNTsdk which you may have added to your basic TNTedit system.

Within the NAFTA point-of-use area (Canada, U.S., and Mexico) and with shipping by UPS ground.

TNTedit Product Code    Price to upgrade from TNTedit:
                        V5.90    V5.80
D30 to D60              $175     $275
D80                     $225     $325
M50                     $175     $275
L50                     $175     $275
U100                    $300     $450

For a point-of-use in all other nations with shipping by air express.

TNTedit Product Code    Price to upgrade from TNTedit:
                        V5.90    V5.80
D30 to D60              $225     $315
D80                     $275     $350
M50                     $225     $315
L50                     $225     $315
U100                    $350     $525

Installed Sizes.

Loading the TNTedit 6.0 processes onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.

                                           V6.00    V5.90
PC using W31                               46 MB    41 MB
PC using W95                               46 MB    50 MB
PC using NT (Intel)                        46 MB    50 MB
PC using LINUX (Intel)                     40 MB    34 MB
DEC using NT (Alpha)                       45 MB    51 MB
Power Mac using MacOS 7.6 and 8.x (PPC)    60 MB    55 MB
Hewlett Packard workstation using HPUX     50 MB    44 MB
SGI workstation via IRIX                   60 MB    52 MB
Sun workstation via Solaris 1.x            45 MB    40 MB
Sun workstation via Solaris 2.x            44 MB    40 MB
IBM workstation via AIX 4.x (PPC)          56 MB    50 MB
DEC workstation via UNIX (OSF/1) (Alpha)   61 MB    54 MB

Getting Started Booklets

Introduction.

The collection of Getting Started, Introductory, and miscellaneous booklets continues to expand.  Seven new booklets are being shipped with V6.00.  Currently the series contains 45 booklets, all of which have been provided to you.  These booklets now contain over 1000 pages and 3000 color illustrations: the equivalent of three good-sized textbooks of material on geospatial analysis.  As usual, the sample geodata sets used in each booklet have also been included on the CD and at microimages.com.  Almost all of this geodata is sized to be usable with TNTlite.

Direct Access.

All the TNT booklets can now be conveniently viewed or printed in color from within the TNT products.  The Adobe Acrobat Reader, booklets, and sample geodata can all be installed from your TNT product CDs.  If you already have the Acrobat Reader installed, it will be used.  The Display menu and “Getting Started” icon on the TNTedit and TNTlite toolbar provide cascading menus of all the booklet titles.  Choose a booklet, and your TNT product will start the Reader in a separate window and load that booklet.  You can then toggle back and forth between the TNT work windows used in a process and the booklets related to that activity.

Reminder Screens.

For Professional Products.

TNTmips and TNTlite equivalents now all provide two reminder windows when a product is started.  The first is a table-of-contents window which provides a list of current titles of all the booklets.  The second screen provides a brief summary of the features of one booklet and the associated TNT process.  If you select a specific booklet from the table-of-contents window, the second screen will be for that booklet.  If you do not select anything specific, the second screen will be a different booklet each session, eventually rotating through the summary windows for all booklets.  Each detailed summary window provides a hot-link to start the Reader and load that booklet.

Professional clients will see these two windows only the first time a new version of the product is started and every 20th time that product is restarted.  No means is provided to bypass these windows or change this interval.  They will simply remind you and your staff periodically of the availability of these valuable aids.

Before these booklets were available, MicroImages found that it took six months to a year for a new software support specialist to “come up to speed”: perhaps as long as 12 months to achieve the breadth of knowledge about TNTmips that can now be gained in one month devoted to completing all these tutorials.  If you are the boss, it is particularly important that staff assigned to work with TNTmips or TNTlite take the first month to go through each tutorial.  Even if they are experienced with some other GIS or IPS software, this month will pay handsome dividends in the speed, and more importantly the breadth, of what they will accomplish for you.  If you are in a hurry, you might consider paying them a bonus for each booklet they complete at home.

For TNTlite.

TNTlite users can now acquire the product in many different ways:  borrowing a CD, downloading from microimages.com or a mirror site, on a CD prepared by someone else, and so on.  Often these valuable booklets, the associated geodata sets, and all reference to them are left behind in these transactions.  For example, educational institutions have already prepared course CDs with a selected version of TNTlite and left off the booklets.

MicroImages created TNTlite to help students and professionals learn how to use the tools of geospatial analysis.  To that end, these tutorial booklets have been created at considerable expense by several experienced Ph.D. level educators who are professional writers at MicroImages.  Increasing the circulation of TNTlite by electronic methods without including these tutorial materials does not provide a good basis for learning geospatial analysis.  As a result, no matter how TNTlite is acquired, its user will now be presented with these two windows every time they start that TNT product.  In this manner, they will be constantly reminded of the content and availability of the tutorial materials and the means by which they can be acquired and used.  Thus, if the electronic versions of the booklets were not forwarded, each new user will be advised at startup that they exist and can be obtained from their provider of TNTlite, from an official MicroImages CD, or by download from microimages.com.

Distribution Changes.

Easy electronic access to the extensive color material in these booklets, and the common availability of color printers, have greatly reduced demand for the black and white printed versions, which rapidly go out of date.  All booklets are now being distributed on the CD with each release.  The most recent versions of any booklets that have changed can be found on microimages.com, and increased efforts are being made to keep them up-to-date.  As a consequence, after V6.00, MicroImages will no longer distribute each new and updated booklet in printed form.  Some of the original booklets you have are now more than a year old, and it is logical to use the newer electronic versions.

NOTE:  Printed black and white versions of new and updated Getting Started, Introductory, and miscellaneous booklets will no longer be shipped with future TNT product upgrades after V6.00.

New TNT products will still be supplied with a complete set of the most recent black and white printed booklets.  This will help new users get an immediate overview of the capabilities of their product and ease the transition into the use of the electronic format.  Some people also still seem to want to evaluate a new product by weight rather than utility.  Similarly, new TNTlite kits containing every booklet will also continue to be available at $50 each including postage to anywhere in the world.  At this point, this fee pays for the reproduction of the booklets and their shipping, as the TNT CD included is of negligible cost.

Status of Booklets.

Previously Completed Booklets.  [38 booklets already in your possession]

Introduction to TNTlite
Surface Modeling
Displaying Geospatial Data
Georeferencing
Feature Mapping
Theme Mapping
Editing Vector Geodata
Image Classification
Editing Raster Geodata
Navigating
Making Map Layouts
Mosaicking Raster Geodata
Importing Geodata
Building and Using Queries
3D Perspective Visualization
Interactive Region Analysis
Pin Mapping
Acquiring Geodata
Managing Relational Databases
Making DEMs and Orthoimages
Style Manual
Vector Analysis Operations
Spatial Manipulation Language
Using Geospatial Formulas
Exporting Geodata
Creating and Using Styles
Editing CAD Geodata
Filtering Images
Editing TIN Geodata
Getting Good Color
Combining Rasters
Sketching and Measuring
Digitizing Soil Maps
Managing Geoattributes
Rectifying Images
Constructing a HyperIndex
Introduction to Map Projections
Changing Languages (Localization)

New V6.00 Booklets.  [7 new booklets shipping]

Analyzing Hyperspectral Images
TNT Technical Characteristics
Network Analysis
Sharing Geodata with other Popular Products
Windows 3.1x, 95/98, and NT:  Installation and Setup Guide
Macintosh:  Installation and Setup Guide
Windows 3.1:  Optimizing Windows 3.1x

Out-of-Date Booklets.

Many earlier booklets were updated for V6.00, but not all (34 booklets are current).  The following Getting Started booklets, while very useful, are still out-of-date, as they match an earlier version of the TNT products.

Sketching and Measuring: needs major revisions
Using SML: needs major revisions
Making Map Layouts: needs major revisions
Managing Relational Databases: miscellaneous changes
Interactive Region Analysis: add geotoolbox changes
Vector Analysis Operations: line to line combinations
Displaying Geospatial Data: add simulations
Feature Mapping: rewrite hole filling
Creating and Using Styles: add hatch patterns
Making DEMs and Orthoimages: miscellaneous changes
Changing Languages (Localization): add new utilities

Possible Future Booklets.  [19 possible units]

Priority is now being placed on upgrading all existing booklets to be current with each new release.  As a result, it is not possible to predict if any new booklets will be produced in a given time period.  The following is simply a list of topics for new booklets.

UNIX:  Installation and Setup Guide
Enterprise Installations
Scanning
Vectorizing Scans
Using the Software Development Kit
Surface Analysis Operations
Using the Electronic Manual
Introduction to Hazard Modeling
Modeling Watersheds and Viewsheds
Extracting Geodata
Introduction to APPLIDATs
Introduction to Remote Sensing
Introduction to GIS
Introduction to RADAR Interpretation
Introduction to Hyperspectral Imaging
COGO
Introduction to Digital Photointerpretation
Introduction to Creating Management Zones for Precision Farming
Introduction to PseudoDOQs from 35 mm Slides

TNT Reference Manual

Status.

The Reference Manual this quarter has 2585 single spaced pages distributed as:

• Basic System Operations    195 pages
• Display                    695 pages
• Edit                       310 pages
• Process                   1168 pages
• Support                     90 pages
• Glossary                    92 pages
• Appendix                    35 pages
Total                       2585 pages

The HTML version of the Reference Manual installs into 35 MB with the illustrations or into 7 MB without them.  The Microsoft Word version of the Manual is 76 MB.  Last-minute supplemental sections, which do not appear in the on-line HTML version or the Microsoft Word version, were created for new processes and features.  These sections were completed for V6.00 after the master CDs were created for the reproduction process.  These 24 additional pages are included in supplemental, printed form as follows.

Hatch Pattern Editor (8 pages)
Polygon Fitting (13 pages)
GPS Log File (3 pages)

New TNT Features

* Paragraphs or main sections preceded by this symbol “*” introduce significant new processes or features in existing processes released for the first time in TNTmips 6.0.

* System Level Changes.

System.

Installation.  Installation instructions for the Windows and Mac versions of the TNT products have been updated.  Booklets entitled:

          Windows 3.1x, 95/98, and NT:  Installation and Setup Guide
          Macintosh:  Installation and Setup Guide
          Windows 3.1:  Optimizing Windows 3.1x

are being shipped with V6.00 and new product shipments.

ToolTips/DataTips.  The color can now be set for the text, background, and border used for ToolTips and DataTips.  These color settings are global (for example, one text color is used everywhere).  You can change them to make your Tips easier to see on a portable computer, in varying lighting conditions, or with a particular display’s color scheme.  Use the Setup/Preferences dialog to set your preferred colors.

Adobe Acrobat Reader.  The TNT products now provide direct access to all the Getting Started and other booklets directly from the menu bar.  The Reader and all the booklets can now be installed as part of your TNT product in the same TNT directory.  Optionally, only the Reader can be installed, with the booklets and associated geodata accessed directly from the V6.00 CD if this slower performance is tolerable.  When you select a booklet from the menu, TNT will automatically start the Reader in a new window (if it is not already running) and open that booklet.  You can then review the booklet while you experiment with the corresponding features in TNT.

Introducing DLLs.  Early in the preparation for V6.00, work was undertaken to move all the RVC Project File functionality into Dynamic Link Libraries (DLLs) and shared libraries (the equivalent on the UNIX platforms).  This work is a prerequisite to the development of a TNTatlas-like web server based upon the RVC Project File concept.  To support this, lower level DLLs have been created for map projections, georeferencing, region analysis, and other commonly used shared features.  Integrating these DLLs into the TNT products required checking the many processes which use them to make sure they are correctly generalized.

These DLLs and shared libraries provide the basis for further integration of the TNT processes.  Increasingly, clients are requesting that every common feature be available in every process (for example, automatic map projection reconciliation, region creation, measurement tools, GPS access, and so on).  Attempting to accommodate these requests had been bloating each process through the replication of these code sections, with corresponding increases in load times.  It also created problems for individual software engineers in maintaining the duplicate code and checking it in each process (especially when the main process was not actively being revised).  Using DLLs and shared libraries means that TNT features common to several processes can be loaded once by the first process to use them and then shared by subsequent processes.

Significance of DLLs.  What is the impact of this on the TNT products?  The most obvious result is that the installed size of TNTmips 6.0 for W95, W98, and NT4.0 is reduced by at least 21 megabytes.  Even further overall size reductions may occur in future versions as more integration takes place.  Eventually, this means faster switching between processes, as the shared libraries will already be in memory.  Gradually, more of the features we all consider standard in TNTmips will appear in every process as they are rewritten to use these shared libraries.  Shared libraries mean that less code has to be maintained, reducing errors.  They also mean that errors in shared portions of the code of a TNT product will be detected at MicroImages in any process tested, rather than remaining hidden in some infrequently used process.

Historical Perspective.  It required 4 to 5 years for MicroImages to figure out and implement the scope and makeup of the original DOS based MIPS image processing system.  In the final years of work on the DOS MIPS product (version 3.x), we began to make it smaller, more efficient, and more reliable, while still adding features.  At that point in time, in response to the release of Microsoft Windows, MicroImages chose to expand our product line to encompass GIS as well as image processing and to support all popular platforms.  After 6 years of development and changes, TNTmips has become a general geospatial analysis system, providing a wide variety of capabilities requested and required by a diverse clientele.

Evolution.  Today’s TNTmips has new, widely used features that we did not even know we needed 6 years ago.  For example, the surface fitting capabilities of TNTmips are neither GIS nor image processing, but are now commonly used to bridge the gap between database, vector, and raster geodata uses.  Network analysis is only now being fully developed in TNTmips.  New opportunities are arising with the promise of hyperspectral image analysis.  Connecting to, and easily sharing geodata with, other common commercial products (Oracle, Access, Excel, Illustrator, ...) is a recent requirement.  As a result, the TNT products now offer the most comprehensive geospatial analysis system available.  And, as with DOS MIPS, more energy can now be devoted to increasing the efficiency and reliability of these products while actually shrinking their size.

Project File.

Project File and object size limitations have been lifted on certain platforms.  Many clients were beginning to approach the 2-Gigabyte limit originally imposed on Project File and object size by Windows 3.1.  In V6.00, Project File (RVC) size is now limited to 16 Terabytes, and object size to 4 Terabytes.  These limits are in effect for the Windows 95/98/NT and DEC UNIX platforms.  For Project Files created on a FAT32 partition in Windows 95/98/NT, the limit is 4 Gigabytes due to FAT32 format limitations.  All other platforms remain limited to 2 Gigabytes in both Project File and object sizes.  MicroImages is investigating which other platforms’ operating systems will support files greater than 2 Gigabytes.
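The limits just described can be summarized in a small sketch (the platform keys and helper function are hypothetical illustrations; the byte values follow the text above):

```python
# V6.00 Project File (RVC) size ceilings as described above (illustration only).
TB = 2 ** 40
GB = 2 ** 30

RVC_FILE_LIMIT = {
    "windows-95-98-nt": 16 * TB,   # 16-Terabyte Project Files
    "dec-unix":         16 * TB,
    "fat32-partition":   4 * GB,   # FAT32 format limitation
}
DEFAULT_LIMIT = 2 * GB             # all other platforms
MAX_OBJECT = 4 * TB                # object-size ceiling on the 16-TB platforms

def file_fits(platform, size_bytes):
    """Return True if a Project File of this size is allowed on the platform."""
    return size_bytes <= RVC_FILE_LIMIT.get(platform, DEFAULT_LIMIT)

print(file_fits("fat32-partition", 8 * GB))  # False: FAT32 caps at 4 GB
print(file_fits("dec-unix", 10 * TB))        # True
```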

If an object is usable in TNTlite but has subobjects that are not usable, this will be reported.  For example, attribute records cannot accumulate so that they exceed 1500 per vector object.

TrueType.

In previous versions of the TNT products, you were using a TrueType rendering engine which was written by MicroImages.  V6.00 now uses an improved TrueType engine called FreeType which is distributed on the Internet as open software. This new engine supports character hinting and smoothing, which a number of clients have requested.  These special rendering features will make your text look better (equivalent at least to Windows’ rendering) in small sizes on both the display and prints.  It also renders slightly faster than the previous method.  As distributed, FreeType did not support Macintosh TrueType fonts.  MicroImages modified the open software to add Macintosh font support for use in TNT.  The source code for these Macintosh related modifications has also been transmitted to the FreeType group for integration into the next release of FreeType.

* HTML Built-In (new prototype feature).

The HyperText Markup Language (HTML) has become a widely used standard for designing web sites using forms, pages, graphics, links, and so on.  As a result, many of you are now familiar with it and use it in some form.  MicroImages has added an HTML interpreter to the TNT products.  You can use it in V6.00 as a function in SML for creating instructions (see the SML section below).  It is also being introduced as a means of creating windows and forms within the TNT products.  The HTML interpreter used was obtained as open source code via the Internet and then adapted for use in the TNT products.  These adaptations include changing it from using BDF fonts to using TrueType fonts (via the FreeType rendering engine) and functionalizing it for use in SML.

In Start-up Windows.  This new HTML interpreter is used for the two new windows that now show every time TNTlite is started.  They describe, and provide direct links to, the Getting Started booklets.  Learners using TNTlite need constant refocusing on these written materials, or they will overwhelm MicroImages’ software support with questions that are already addressed thoroughly in the booklets.  These same HTML-generated windows also show once every 20 times your TNT professional product is started, as reminders of the availability of these materials.

For On-Line Help.  MicroImages has other immediate plans for the use of HTML.  Currently, it is difficult to maintain the contents of the on-line help at the software engineering level, where it is currently being haphazardly inserted.  It is also difficult to internationalize the current help system so that its contents can be translated.  The HTML interpreter will form the basis for a new help system in V6.10.  In this system, the help instructions will be added by the scientific writers by editing a collection of HTML files, which could also be translated, just as the interface resource files are being translated.

Within the new system, provision will be made to allow you to add your own help instructions.  Many of you have requested a means to incorporate personal instructions and notes into a TNT product as you master a process or activity.  The TNT products are so wide-ranging that it is easy to forget exactly how you used a particular process several months before.  This new help system will enable you to embed and retain personal instructions describing precisely how you used a particular TNT feature.  You may not know or want to learn HTML, so you will also be able to simply enter straight text as your notes.  Even if you wish to be a bit more elaborate in the layout of your personal notes, you can avoid learning HTML.  Simply open a concurrent session of Word or a similar product and save your note design from it into this help system as an HTML file.
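The note-keeping idea above amounts to turning plain text into a page the HTML-based help viewer can display.  A minimal sketch of that idea (the wrapper and function name are illustrative assumptions, not the actual V6.10 mechanism):

```python
# Wrap a plain-text personal note as a minimal HTML page (hypothetical sketch).
import html

def note_to_html(title, text):
    """Escape a plain-text note and wrap it in a minimal HTML page."""
    t = html.escape(title)
    return ("<html><head><title>%s</title></head>\n"
            "<body><h2>%s</h2>\n<pre>%s</pre></body></html>"
            % (t, t, html.escape(text)))

page = note_to_html("Mosaicking notes", "Set cell size < 30 m & check overlap.")
print(page)
```

The `<pre>` element preserves the note's line breaks, and escaping keeps characters such as `<` and `&` from being misread as markup.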

In Layouts.  Another objective in the use of this new interpreter would be to create HTML formatted text layers for direct use in your layouts.  Again, you would be able to open and use word processors and HTML software tools to create your text materials and would not have to learn to encode HTML to use this approach.

Map Projections.

The “Gauss-Kruger” projection (similar to UTM) has been added as a standard, built-in coordinate system.

The “Rectified Skew Orthomorphic” projection is now available; it is simply an alternate name for the “Oblique Mercator” projection.

Display Spatial Data.

General.

* Multiple View Cursors.  View-to-view cursor tracking has been added.  When this option is selected, the position corresponding to the mouse cursor in any view is shown in all other open views except 3D views.  Use it to automatically locate features in another view containing georeferenced materials of a quite different appearance.  For example, point to a feature in an image and find it on a map.
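Under the hood, this kind of tracking amounts to converting the cursor's pixel position in one view to map coordinates and back to a pixel position in the other view.  A minimal sketch, assuming simple north-up affine georeferences (the View class and all values are hypothetical, not TNT internals):

```python
# View-to-view cursor tracking under a simple north-up affine georeference:
# map x increases with column, map y decreases with row (no rotation terms).

class View:
    def __init__(self, origin_x, origin_y, cell_w, cell_h):
        self.ox, self.oy = origin_x, origin_y   # map coords of pixel (0, 0)
        self.cw, self.ch = cell_w, cell_h       # map units per pixel

    def pixel_to_map(self, col, row):
        return self.ox + col * self.cw, self.oy - row * self.ch

    def map_to_pixel(self, x, y):
        return (x - self.ox) / self.cw, (self.oy - y) / self.ch

# An image view at 10 m cells and a map view at 50 m cells, same datum:
image = View(500000.0, 4600000.0, 10.0, 10.0)
chart = View(499000.0, 4601000.0, 50.0, 50.0)

# The cursor at pixel (120, 80) in the image view...
x, y = image.pixel_to_map(120, 80)
# ...lands at this pixel in the map view:
print(chart.map_to_pixel(x, y))   # (44.0, 36.0)
```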

Groups and Layouts.  Layouts can now be directly rendered into a raster.  For example, use this to convert layouts containing high-resolution 3D perspective views into raster form.

When a previously-saved “group” is added to a layout, it can be added as either a 2D or a 3D group.

Coordinate Readouts.  An icon now indicates the current source for the coordinates showing in the lower right hand corner of the view.  The icon is just to the left of these coordinates.  It changes to indicate that the coordinates are coming from the mouse position, the GPS position, the center of the view, or those you manually entered.

Vector Smoothing.  Vector elements that are warped as they are read from an object in a different projection are now rendered as curves to create a more accurate display.

Panning View.  The Pan View icon in the View windows lets you pull out a line to specify the starting and ending positions of the desired pan.  Previously, the line could then be relocated by moving the mouse and selecting either the “plus” or “X” end of the line.  By default, this tool no longer lets you relocate the line; instead, it automatically pans the view as soon as you release the mouse after drawing out the line.  The previous manual draw-and-edit mode is still available; it uses the right mouse button to initiate the redraw.  You can change from automatic back to manual mode in the View tab panel of the Options window on the display toolbar.

Element Selection.

An expanded, general-purpose selection tool has been integrated into the new “Geotoolbox” as described below.  Accessing it takes a couple of steps, and it performs new functions.  Replacing it for easier access is a simple single-point selection tool.  This tool is invoked by the same red arrow icon as before and works identically to the previous single-point selection tool.  It has no dialog window, as there are no options to set.

The more complex multiple element selection tool is still available but has been moved into the Control window.

By default, if no elements are enabled in any layer, all elements in the active layer are automatically enabled for selection by any tool.  This eliminates the confusion beginning users experienced in discovering how to make elements selectable.  This auto-selection behavior is also the default for any selections made via the new Geotoolbox.

Raster Layers.

Contrast in Regions.  Contrast enhancement tools can now be used interactively to adjust the contrast in any raster layer based upon the histogram of any subportion of the image.  Simply use any of the area tools in the new Geotoolbox window to create a subregion (contiguous or disjoint) in a raster area.  Then use the histogram of this region as input to the interactive contrast adjustment tools and apply the result back to the total image.  The new “Update Contrast” button computes the new contrast based upon the area of the current region and applies it to the active layer.

An example will illustrate this approach.  Assume you are viewing a grayscale image with a wide contrast range because it contains large areas of both land and water features, yet few tonal details are displayed in the water area.  You could work with the general tools to recontrast the whole image.  With this new approach, simply draw a region around a portion of the water area, press the “Update Contrast” button, and apply a normalized contrast based on the temporary histogram of the region area only.  When the view is redrawn, the water area shows tonal details, and the land area is saturated.  It is even more instructive to create a region that contains a single, generally uniformly toned agricultural field in the land area of this same display.  Then normalize this field’s histogram to redraw the entire image, bringing up in-field details in this and similar agricultural fields at the expense of almost all other features in the view.
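The idea behind region-based contrast can be sketched as a linear stretch whose parameters come from the histogram of the region alone rather than the full image.  The following is an illustrative approximation in Python with NumPy, not TNTmips’ actual contrast algorithm; the function name and percentile choices are hypothetical.

```python
import numpy as np

def stretch_from_region(image, region_mask, out_max=255):
    """Linear contrast stretch of the WHOLE image, with the stretch range
    derived from the masked subregion only (e.g. a water area)."""
    region = image[region_mask]
    # Use the 1st/99th percentiles of the region so outliers do not
    # dominate the stretch endpoints.
    lo, hi = np.percentile(region, [1, 99])
    stretched = (image.astype(np.float64) - lo) / max(hi - lo, 1) * out_max
    # Values outside the region's range saturate, just as the land area
    # saturates in the example above.
    return np.clip(stretched, 0, out_max).astype(np.uint8)
```

Applied to the water-region example, cells inside the region’s tonal range spread across the full output range, while the brighter land cells clip to white.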

* Color Palettes.  Are you tired of making color stretches and palettes of your own?  Now a suite of interesting color palettes is available.  Simply pick a palette by name from the list and apply it to the current grayscale image, DEM, or other raster layer.  These prepared palettes are automatically interpolated so they can be applied to 8-bit, 16-bit, or other raster data types.  Good DEMs are 16-bit, so this is a particularly useful feature.  These palettes were recreated from color schemes used in other software products.  If you know of any interesting sources of single band color schemes you would like to have added to this list, please bring them to MicroImages’ attention, as they are not difficult to recreate and incorporate.  Options are also now available to reverse and negate the current palette.
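The automatic interpolation that lets one prepared palette serve 8-bit, 16-bit, or other raster types can be approximated as follows.  This is a hedged sketch in Python with NumPy, not the TNTmips implementation; the function name is illustrative.

```python
import numpy as np

def apply_palette(raster, palette):
    """Apply an RGB palette (N x 3 array) to a raster of any integer type
    by linearly mapping the data range onto fractional palette indices."""
    lo, hi = int(raster.min()), int(raster.max())
    # Map each cell value to a position along the palette, then round to
    # the nearest palette entry.
    idx = (raster.astype(np.float64) - lo) / max(hi - lo, 1) * (len(palette) - 1)
    return palette[np.round(idx).astype(int)]
```

The same small palette thus colors a 16-bit DEM (tens of thousands of distinct values) just as readily as an 8-bit grayscale image.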

Fine Tuning Color Balance.  The color-balancing tool with its various color model sliders can be used to fine tune specific colors in a display or a print.  For example, you may have used this tool to improve the green of features already showing in some less suitable shade of green.  Previously, if the view was made by selecting a color palette, you could also fine tune the colors, but these adjustments were lost.  In V6.00, the revised color balance you create can be saved.  This allows other processes such as TNTatlas to use your finely adjusted color balances.

Auto-Color Map Creation.  Several years ago, a process was provided in TNT to display a single band, 8-bit cluster map (from unsupervised processing) as pseudo color-infrared, natural color, each original band, and any other desired color combination.  Using this procedure, you can create a single 8-bit image from many multispectral images and view it over and over in many different pseudo color schemes.  While this process is very powerful, it is little used.

This procedure has been modified so that it works with 16-bit or any other data type as input images and prepares color maps for viewing the single new 16-bit classification image with these pseudo color schemes.  Now this process can be used to prepare many different color schemes for viewing a 16-bit classification map (64K possible classes) prepared from the many 16-bit spectral bands making up hyperspectral images.

There are many new, different, and unique color schemes that can be selected to display hyperspectral images in RGB.  V6.00 provides a completely new visualization tool called the Hyperspectral Explorer to quickly test only a few of the possible combinations of 3 images selected from 256 spectral bands to display in RGB.  These expanded, unique autocolor map procedures in TNTmips allow a single 16-bit classification map to be created and viewed in as many of these experimental color schemes as desired.

Standard Raster Statistics.  The standard raster statistics (including the histogram table) can be saved to a text file from the histogram dialog.

Theme Mapping.

The dialog used in theme map creation now allows class ranges to be defined that are outside the actual data range.  This permits the creation of a suite of comparable theme maps from multiple datasets where the class ranges are consistent for all maps.
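The benefit of allowing class ranges outside the actual data range is that one fixed set of breakpoints classifies every dataset identically.  A minimal sketch of the idea in Python (the function name and breakpoint handling are illustrative, not TNT's dialog logic):

```python
def classify(value, breaks):
    """Assign a theme class from fixed, ascending breakpoints.  Values
    beyond the data range of any one dataset still land in a defined
    class, so the same breaks yield comparable theme maps everywhere."""
    for i, b in enumerate(breaks):
        if value < b:
            return i
    return len(breaks)  # at or above the last break
```

Two datasets with different ranges, classified against the same breaks, then produce directly comparable theme maps.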

Perspective Views.

An alternative, simpler 3D view control has been added which provides control of only key parameters.  The biomass APPLIDAT first released in V5.90 now uses it for a simpler, more intuitive 3D viewing operation.  The older, more complete control panel giving more control over the view is still available as an option.

Histogram View.

The total number of cells in the histogram is now shown.  This is especially useful when the histogram has been computed for an area defined in the new Geotoolbox.

* 3D Simulator.

Terrain Following.

A flight path can be selected that follows the surface at a fixed altitude above its terrain.  The relative smoothness or discontinuity of the surface determines how “bumpy” the flight will be.  Some sort of smoothing of the flight path will eventually be added.  It has proven difficult to find an appropriate procedure, since common splining in the z axis can fly you into the terrain at low altitudes.
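The terrain-following behavior described above amounts to sampling the surface under each path position and offsetting the camera by a fixed clearance; with no smoothing, the camera altitude tracks every bump, which is exactly why the flight can feel rough over discontinuous surfaces.  A minimal Python sketch under those assumptions (the DEM sampler is hypothetical):

```python
def terrain_following_path(path_xy, dem_lookup, clearance):
    """Given 2D waypoints and a DEM sampling function, return 3D camera
    positions that hold a fixed clearance above the terrain.  Every
    terrain bump passes straight into the flight altitude."""
    return [(x, y, dem_lookup(x, y) + clearance) for x, y in path_xy]
```

Naively splining the resulting z values is what can fly the camera into the terrain at low clearance, since a smoothed curve may dip below a sharp ridge.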

A very good quality 7 MB MPEG movie simulating a flight at a fixed altitude and also using curve smoothing, foreground smoothing, and despeckling can be downloaded from /featupd/v59/mpeg/kern3.mpg.  This movie is made from a LANDSAT TM image (bands 5, 4, and 3: “infragreen” color rendering vegetation in bright green) overlaid upon a resampled USGS 3-arcsecond DEM.  It flies generally west to east over the California Sierra Nevada Mountains, around Mount Whitney, and over the continental divide (all S.E. of San Francisco).

Smoothing Turns.

Turns are now “smoothed”.  This eliminates “jerky” turns when following a curved path in the XY plane.  You can set the maximum turn rate, which controls how far, in angle of rotation, your view will jump at a given ground speed.  When you decrease the maximum turn rate, the turn begins earlier and ends later in the path.  A 5 MB sample movie in the Grand Canyon illustrating this feature can be downloaded from /featupd/v59/mpeg/grande.mpg.
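Turn smoothing of this kind is commonly implemented by clamping the per-frame heading change, so a sharp turn is spread over several frames.  This Python sketch illustrates the general technique only; it is not TNTmips’ code, and the function name is hypothetical.

```python
def smooth_headings(headings, max_turn_rate):
    """Limit the change in heading (degrees) between successive frames to
    max_turn_rate, spreading a sharp turn over several frames."""
    out = [headings[0]]
    for h in headings[1:]:
        # Shortest signed angular difference, in the range (-180, 180].
        delta = (h - out[-1] + 180) % 360 - 180
        delta = max(-max_turn_rate, min(max_turn_rate, delta))
        out.append((out[-1] + delta) % 360)
    return out
```

With a lower maximum rate, a 90 degree turn takes more frames to complete, which is why the turn appears to begin earlier and end later along the path.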

Foreground Smoothing.

Smoothing is now available to significantly improve the appearance of large cells passing under your view in the foreground.  These blocky foreground cells occur when you are using a draped image of low resolution (for example, Landsat TM) and are flying close to the ground.  These distortions are effectively removed by this option, which adds about 10% more to the time required to render an MPEG movie.

Background Despeckling.

Remove it.  “Speckling” or “shimmering” can occur in the far distant portion of your simulation in complex ground areas, along the sky line, and at the edges of distinct features.  This speckling is now significantly reduced by a new option which averages it out in the distant features of the view.  However, selecting this option increases the computation time of an MPEG movie by as much as a factor of 3.

What is it?  Pixel cells produced in the far portion of a perspective simulation near the horizon can show speckling during playback.  You have already noticed that the shimmering effect is most pronounced when the image area in the distance has a lot of high resolution differences in color/brightness.  This rapid changing of the color/brightness for image cells chosen for display in the distance causes the speckling effect you see when the movie is played back at a high frame rate.

These artifacts occur because each single cell displayed for a distant portion of a frame covers a large ground area.  For example, each single display cell near the far horizon might represent a 5 by 5 array of original image cells.  V5.90 simply picked one of these image cells and used it in the frame.  The next frame rendered always has a slightly different view angle of the terrain and image even when flying in a straight line.  Thus, what you see as the same display cell can end up as the selection of an adjacent image cell which is quite different in color and brightness from the one selected for the previous or next frame.

Is it a common problem?  MicroImages has searched the Internet and reviewed real image movies prepared by other systems.  This speckling occurs to a greater or lesser extent in all simulations found that were produced with real, high-resolution image overlays.  It does not markedly occur in simulations that are totally computer generated (no real images), color elevation or other surface simulations, simulations made with maps, and so on.  Many of these simulation products use a polygon shading approach which is not applicable to real image rasters.  Speckling has been found in every simulation we found which was created by our direct competitors (PCI, ERDAS, ESRI, ...).  Obviously, mitigating its effect is a challenge for all image processing systems designed to deal with real world imagery using rasters for simulation.

What can be done next?  V6.00 reduces speckling by a brute force procedure: a moving average of the far-view image cells subtended by each display cell needed in the current frame.  This constant moving-window recomputation for each frame causes a significant slowdown in the rendering of your MPEG frames.  At present, the simulation computations in the TNT products do not use the pyramid layers in the Project Files.  But these layers already contain smoothed and resampled images of varying resolution which could be used to simply select the cell of the proper dimensions for each display cell as a function of its distance from the viewpoint.  Incorporation of these unique pyramid layers to control speckling will be attempted in V6.10.  If it produces good results, it has the additional unique property of rendering each frame faster than the current method, with or without despeckling.  This rendering speed improvement would result from using smaller pyramid layers for distant portions of the view.
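The brute-force averaging described above can be illustrated with the 5-by-5 example from the previous paragraphs: instead of picking one of the 25 source cells subtended by a distant display cell, average them all, so frame-to-frame jitter in which cell is "chosen" disappears.  A minimal Python/NumPy sketch of that idea (not TNTmips' renderer; names are illustrative):

```python
import numpy as np

def despeckled_cell(image, row, col, footprint):
    """Value for one distant display cell: the mean of the footprint x
    footprint block of source image cells it subtends, instead of an
    arbitrary single cell picked from that block."""
    r0, c0 = row * footprint, col * footprint
    block = image[r0:r0 + footprint, c0:c0 + footprint]
    return block.mean()
```

Because the mean of the block changes only slightly as the view angle shifts between frames, the played-back movie no longer shimmers, at the cost of recomputing the average for every distant display cell of every frame.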

Special Purpose Paths.

Orbiting.

What is it?  The incorporation of smoothing and despeckling options provided the basis for some new and particularly useful special effect simulations.  Orbiting a point will create a simulation that views the point from all angles over a 360 degree path at the altitude you select.  Simply draw a circle in your 2D view of the layers, or choose a point and set a radius.  When the MPEG movie is available, set your viewer to run this MPEG movie over and over, and you will be presented with smooth rotation of the terrain being viewed.  A 360 degree MPEG movie (15 MB file) is provided on the “A” CD in Litedata/mpeg.  It orbits the top of Mount Whitney in the California Sierra Nevada Mountains (area S.E. of San Francisco).  It is also located at http://www.micro-images.com/featupd/v59/mpeg/whitorb4.mpg.  A color plate is attached entitled New 3D Simulation Motion Types to illustrate the setup of an orbiting simulation.
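Generating an orbit path from a chosen center point and radius is geometrically simple: place the camera at equal angular steps around a circle at the selected altitude, always looking at the center.  A hedged Python sketch of the path setup only (not TNT's simulator; names are hypothetical):

```python
import math

def orbit_path(cx, cy, radius, altitude, frames):
    """Camera positions for one 360-degree orbit around (cx, cy).  Each
    frame's camera sits on the circle at the given altitude and would be
    aimed back at the center point."""
    positions = []
    for i in range(frames):
        a = 2 * math.pi * i / frames          # angle for this frame
        positions.append((cx + radius * math.cos(a),
                          cy + radius * math.sin(a),
                          altitude))
    return positions
```

Because the last frame's position adjoins the first, looping the resulting MPEG movie plays as a seamless, continuous rotation of the terrain.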

Why use it?  Why is an orbit movie interesting in our geospatial analysis?  Human beings with narrow eye stereo bases get most of their stereo effects beyond a few feet by how objects move relative to each other.  Yes, by now we have all seen the flyby type of simulations where stereo clues and depth perception are enhanced.  However, these static flyby movies are large, use fixed paths, and can even confuse your sense of the terrain viewed.  More importantly, it is not possible to use them to integrate a mental model of the terrain, topography, vegetation, and other surface features that are viewed.

You will find that orbiting an area in a simulation allows the viewer’s mind to build up a good 3D model of what is being viewed.  If you study the Sierra mountain-top simulation for a couple of minutes, you will soon begin to understand its topography, vegetation cover, drainage, and other physiographic characteristics in detail in a way not possible with a 2D view, a static 3D perspective view, or a flyby.  This is particularly useful when your client is not used to seeing things in plan view or is unfamiliar or uncomfortable with seeing in stereo using anaglyph or other types of viewing aids.  This 3D mental model building is also very useful when viewed in common during public sessions by large groups of people using a video projector.  In these situations, it is easy to describe verbally and use a pointer to discuss stereo relationships that can be viewed and understood by all.  The only discomfort seems to occur when you shut off the simulation and the view seems to continue to rotate for a couple of seconds more.

Oscillating.

Another step can be taken where the simulation only oscillates back and forth, viewing the area of interest through a selected angle.  Use the angle pie or angle selector tool to pick the path traversed by the view point and set it for one oscillation back and forth.  This MPEG movie of a 60 degree view path of the Sierra peak is less than 2 MB yet conveys a clear understanding of the physiography of this side of this peak in a way that much larger flybys cannot do.  An 11 MB orbit MPEG movie oscillating through 30 degrees and viewing the top of Mount Whitney in the California Sierra Nevada Mountains (area S.E. of San Francisco) can be downloaded from http://www.micro-images.com/featupd/v59/mpeg/whitswp1.mpg.

Sample Uses.  Let’s think of a typical application where this might be used.  Start with the orbit or oscillating simulation of the Sierra peak played back in PowerPoint to a state or county planning/permit group, an environmental organization, a group of investors, or all of the above.  A series of orbiting simulations could be prepared.  The first might be the one already available.  Next, annotate the draped image to outline the general area proposed and run another simulation with the identical orbital parameters.  Next, edit the image to remove the trees by painting them out and do another orbit.  And so on, even “excavating” in the elevation model to change the topography.  Use PowerPoint to make a slide continuously orbiting the first simulation until its model is understood by the audience.  Then, switch to the next simulation which will come up in exactly the same PowerPoint area with new captions.  And so on, through a series of presentations.  All of you should experiment with this approach, as it is a very effective way to present materials to an audience which is not yet “geospatially” aware or adept.

Orbital viewing is also very useful in visualizing areas of low topography, as after a few rotations, the subtle topography is built up in a mental model and understood.  This is particularly useful when the area to be viewed is close-up and perhaps covered by an image of limited extent such as in many agricultural applications.  If the orbit path is outside the image, a pedestal can be added and many useful topographic clues gained from the rotating edges of the field, quarter section, or section of land.  Also, as in the ski slope design example above, PowerPoint could be used to play back a series of matching simulations of images taken at different times with biomass or other effects inserted.

Why not widely used?  It is interesting to note that while these orbiting simulations have special value in presenting real world geospatial results, comparable examples are not found among the samples of competing simulation products.  Their use seems specifically related to presenting results from serious geospatial projects and would be of little interest for games or other products that attempt to simulate real world flybys.

Using Maps.  Do not forget, TNTmips provides all the tools needed to reduce a topographic map to an elevation model.  When prepared, this map or any other map of the area can be used as the basis for a 3D orbiting map simulation.  Point symbols, annotations, image inserts, and other features can then be added to the map for interesting public presentations.

Small File Sizes.  All of these possible benefits of orbiting, oscillating, and panning (simulations) are provided in quite small MPEG files compared with flyby simulations.  A reasonable single presentation might be from 2 to 10 megabytes.  This smaller size enables a series of modified simulations to be prepared in a reasonable time frame and strung together for a very dramatic presentation, web visual, or PowerPoint electronic report.

Panning.

The smoothing and despeckling options also provide the basis for good panning simulations.  As in orbiting, you can interactively select the position you wish to pan around in a concurrent 2D or plan view.  The pan can also be set at 360 degrees or less and to oscillate once through an angle.  It is best to place the viewer’s position considerably above the point selected and to look down to avoid confusion and the possible dizzying effect of foreground objects passing rapidly across in the front of the view.  A color plate is attached entitled New 3D Simulation Motion Types to illustrate the setup of a panning simulation.

A 7 MB, 360 degree panning MPEG movie with the viewer placed at the top of Mount Whitney in the California Sierra Mountains (area S.E. of San Francisco) can be downloaded from /featupd/v59/mpeg/whitpan2.mpg.

PowerPoint Movies.

Microsoft’s PowerPoint can be used to attractively present your MPEG movies as part of an electronic slide show.  Many MPEG movie viewers are available with a variety of controls and options for playback of your MPEG simulations.  However, often you will want to present or ship an attractive set of electronic slides containing at least one slide that plays an annotated 3D simulation produced in TNT.  An annotated PowerPoint slide can be created which shows a window containing the first frame of your simulation.  When you click the mouse on this frame, the linked MPEG movie will be played in this window.

A color plate is attached entitled 3D Simulation Presentation to illustrate a single PowerPoint “movie” slide.  Since PowerPoint has a free run-time viewer, the stack of electronic slides describing your TNT project with annotated movies, images, maps, and so on can be sent with it to anyone for immediate use on a Mac or PC.  For more information on this topic, see the new Getting Started booklet entitled Sharing Geodata with other Popular Products or your manual for PowerPoint.

Modifications since V6.00 CDs.

Simulation is already a very complex and advanced software activity, as evidenced by this year’s popular animated movies.  However, preparing your own low-cost simulations of perspective scenes to present your geospatial projects requires specialized tools.  The first time an attractive simulation of a flight over your project materials is shown to someone else, they are very interested in it and in what you can do.  However, the first thing they wonder is “where am I?”; in other words, where did we fly or travel, and how high above the terrain or below the water were we?  The insertion of simple profile and/or plan views into your movie may help to orient them as they watch the simulation.

Profile View.  The option for a profile view automatically adds it into the bottom of your MPEG movie.  This provides a static profile of your flight path and the surface, with the simulation’s movement shown along it.  This profile is added into your simulation rather than presented in a separate window, as most MPEG viewers do not provide an option for showing auxiliary movies.

Plan View.  The plan view option is similar to the profile option.  It inserts into the simulation a graphical plan view of the extent of the surface layer.  The position of the viewer is then moved on this plan view as the simulation plays.

General.

Future Plans.  There are some very good 3D simulation products available outside the geospatial analysis field.  We all experience their almost realistic results in movies, videos, weather news, and TV commercials.  Generally these are very expensive systems requiring the highest graphic arts skills available.  What is growing in the TNT products is a set of low cost simulation tools specifically designed to handle the needs of professional geospatial applications.  These procedures must operate with the kinds of input materials commonly used in your activities, such as georeferenced images and maps before or after analysis (for example, map to DEM reduction, automated map projection reconciliation, ...).  They must have tools that enable easy setup in a geospatial context (for example, pulling out a circle in 2D for an orbiting path).  They must provide for automated production of large numbers of simulations (for example, an SML script to automatically produce an orbit simulation of each square mile in an agricultural area).  They must discover and provide those kinds of special effects which best suit the presentation of geospatial information (for example, orbiting and panning).  Considerable software effort will be expended in advancing this simulation process for V6.10, as a long list of ideas and improvements is available.  Please submit your suggestions now.  When 3D and n-dimensional geospatial analysis systems are required, these kinds of visualizations in real time will be very important components.

Sample Movies.  Since MicroImages is currently working on improving all simulation features, new or improved movies are being posted almost weekly in the gallery area of microimages.com.  It is not possible to provide these movies in a timely or up-to-date fashion on CDs.  Please keep checking this movie gallery to get the latest new examples of what can be done with the latest version of this simulation process.  Please also monitor this site to see if new and improved versions of movies you have already downloaded have been posted, as they will be upgraded as new features are added.  Please also consider providing your simulations for posting here.

Time Comparisons.  A comparison was made between the popular and low priced Bryce 3D product and TNTmips.  The same input images and terrain were used and the same settings, including foreground smoothing.  The 10 second MPEG movies produced were of very similar quality.  Bryce required 2 hours and 10 minutes, and TNTmips required 1 hour and 37 minutes to generate the movies.  From this and other comparative tests, we conclude that for the same inputs and outputs, TNTmips’ rendering of simulations is as fast, if not faster, than other popular products designed and sold for the specific purpose of simulation.  TNTmips does have the current advantage of being entirely raster based in its rendering.  These other products may do part of their rendering as polygons, as this is required for artificially built renderings.

Formats.  Currently, TNTmips renders movies only to MPEG format, which is compressed.  Other formats such as AVI and QuickTime (MOV) are also commonly used.  AVI, for example, is typically uncompressed and produces larger files but can render smoother results in frame continuity and internal content.  There are various converters available, some free on the Internet, to convert MPEG into these other formats.  Also, there is a growing need to convert these simulations into NTSC, PAL, SECAM, or other video formats.  These conversions require auxiliary hardware that can range from a few hundred dollars to thousands, depending on the video editing capabilities included.  MicroImages has some knowledge of these devices and can refer you to other clients who are doing this and may be willing to give you a little help.

* Geotoolbox.

A color plate is attached entitled Toolbox Measurements and Regions to illustrate the results of each of these procedures.

Completely Redesigned.

The select, measure, sketch, and region generation tools (part of the select tool) have been integrated into a single Geotoolbox.  It is designed so all controls are presented on a single dialog that uses tabbed panels to switch between related settings.  The integration of these tools now allows simultaneous measurement, sketching, selection, and region-creation using a single graphical element.  For example, the operator can sketch a region around an area shown in a reference layer, view histograms of that area in the raster layers, and generate a permanent region defining the boundary, all without having to redraw the area or leave the Geotoolbox.

Sketching In Layouts.

When sketching, the active group and active layer now determine which object will contain sketch elements added by the user.  If the active group does not support a sketch layer, the user must select a different group before sketch elements can be created.  If the active layer is not a sketch layer, then the topmost “sketch” layer in the active group will be used.  If there is no sketch layer in the group, one will be automatically created.  The sketch tool in V5.90 attempted to use the group closest to the graphic drawn by the user for adding the sketch element.  In complex layouts, this was found to cause confusion, and it was sometimes difficult to get the sketch elements to go into the desired group.  The new method in V6.00 allows precise control over where the sketch elements are saved.

The sketch tool now supports all of the point, line, polygon, and text style options available elsewhere in other TNT processes.

Cross Sections.

A new “Cross-Section” generator is now incorporated into the Geotoolbox for use in display and the object editor.  This tool generates a cross section using a surface layer, such as elevation, and a polygonal vector object, creating a new vector object containing the cross section.  It intersects the current line drawn on the vector with the polygons, creating a line that is split into multiple segments.  The profile line is generated using the distance along the line as the x coordinate and the elevation from the surface layer as the y coordinate.  A baseline is generated and can be set when selecting the vector object in which to save the cross section.  The baseline is the same length as the profile line.  At the start and end of the profile line, and at each point where it intersects a vector polygon edge, a line is dropped from the profile line to the baseline, forming a polygon.  The attributes from the intersected vector polygons are transferred to the corresponding polygons in the cross section.
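The geometry of the generated cross section can be sketched simply: the profile top runs through (distance, elevation) samples, and verticals drop to the baseline to close the polygon.  This Python sketch shows only the vertex construction for a single closed polygon (one segment between intersections); it is illustrative, not the Geotoolbox implementation.

```python
def cross_section(distances, elevations, baseline_z=0.0):
    """Build a closed cross-section polygon from a profile sampled along
    a line: x is distance along the line, y is elevation.  Verticals at
    the start and end drop to the baseline, closing the polygon."""
    top = list(zip(distances, elevations))            # the profile line
    bottom = [(distances[-1], baseline_z),            # drop at the end
              (distances[0], baseline_z)]             # drop at the start
    return top + bottom   # polygon vertices in order
```

In the full tool, a separate polygon like this is formed between each pair of adjacent polygon-edge intersections, and each inherits the attributes of the vector polygon it crosses.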

Regions.

A region can now be automatically traced around a solid area, or from a boundary trace in a raster layer, given one or more points in the region.  This is available via the Geotoolbox “point” graphic tool on the right mouse button.

The histogram viewed in the flood zone region generation tool has been enhanced.  The start and end points and the intervals between can be set for the flood zone.  The dam’s crest can now be specified as either a height above the terrain at that point or as an elevation above sea level.

* GPS inputs.

Introduction.

The GPS capabilities in all TNT products have continued to rapidly expand.  The new sample Data Logging APPLIDAT installed by V6.00 and described below illustrates the kind of sophisticated but simple GPS use that is now supported in SML and elsewhere in all the TNT products.  Even the free TNTatlas now provides GPS support.  In TNTview, GPS support and the new sketching and measuring tools provide the basis for field data collection while exploiting advanced multilayer display capabilities.  The many new GPS features introduced in V6.00 have temporarily exhausted the list of features that you and MicroImages outlined.

A GPS device is a “real-time” source of position coordinates when it is connected to the computer’s serial port.  Please note that when using inexpensive GPS equipment, the software external to that device, such as the TNT products, has very limited control of how and when the device reports positions.

As expected, each manufacturer of GPS equipment seems to have its own idea of the protocol and contents of the data stream sent out by its devices.  Each of these has its advantages from that manufacturer’s viewpoint in the expected applications of its equipment.  Furthermore, some manufacturers, such as Garmin, want to control all the applications of their special features and therefore license out, charge for, and/or otherwise control access to their protocol and its format.  In the longer run, this will not work for general purpose units, but these manufacturers will need to learn this the hard way in the marketplace.

Fortunately, there is also a standard protocol for how a GPS device reports information, and it is supported by most general purpose GPS equipment regardless of cost.  MicroImages supports this common NMEA 0183 (National Marine Electronics Association) standard and the “Trimble ASCII” protocol.
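To make the standard concrete: an NMEA 0183 position report such as a GGA sentence is a comma-delimited line with latitude as ddmm.mmm, longitude as dddmm.mmm, and an XOR checksum of the characters between "$" and "*".  The following Python sketch parses one such sentence; it handles only this common case and is not the TNT products' GPS reader.

```python
def parse_gga(sentence):
    """Extract (lat, lon) in decimal degrees from an NMEA 0183 GGA
    sentence, verifying the XOR checksum when one is present."""
    body, _, cksum = sentence.strip().lstrip('$').partition('*')
    calc = 0
    for ch in body:                      # checksum covers $...* contents
        calc ^= ord(ch)
    if cksum and int(cksum, 16) != calc:
        raise ValueError("bad NMEA checksum")
    f = body.split(',')
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmm -> degrees
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmm -> degrees
    if f[3] == 'S':
        lat = -lat
    if f[5] == 'W':
        lon = -lon
    return lat, lon
```

Because the format is plain delimited text, any software listening on the serial port can extract positions this way regardless of the GPS unit's manufacturer, which is exactly why a common standard matters.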

A color plate is attached entitled GPS Support in Geospatial Display to illustrate GPS capabilities.

GPS Log Files.

TNT products now support a prerecorded source of coordinates called the GPS Log File containing GPS positions.  These files can be created while reading from a GPS device, as well as by editing or other manual methods.  Log files can also be used as virtual GPS devices to simulate GPS input when a real GPS device is not available.

GPS log files store coordinate positions and associated information in a simple comma-separated-value text file.  It is thus a simple matter to create virtual GPS log files using a text editor, spreadsheets such as Excel, or your own programs.  MicroImages will support specific manufacturers’ log file formats if you can supply their documentation.
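Since the log is plain comma-separated text, writing and reading one takes only a few lines in any language.  The column layout below (time, latitude, longitude, elevation) is illustrative only; it is not the actual TNT log file specification.

```python
import csv
import io

def write_gps_log(positions):
    """Write (time, lat, lon, elev) tuples as a comma-separated GPS log
    that could serve as a virtual GPS source.  Columns are illustrative."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["time", "latitude", "longitude", "elevation"])
    w.writerows(positions)
    return buf.getvalue()

def read_gps_log(text):
    """Read the log back, converting numeric fields, skipping the header."""
    rows = list(csv.reader(io.StringIO(text)))
    return [(t, float(lat), float(lon), float(el))
            for t, lat, lon, el in rows[1:]]
```

A file produced this way from a spreadsheet or script can then be replayed as a simulated GPS input when no real device is attached.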

Multiple GPS Sources.

An active GPS source is any directly read GPS device or GPS log file from which positional information is being requested.  Any or all active GPS devices (real or logs) can be selected to display cursors in a view, accessed via an SML script, used in graphical editing, and so on.  The TNT products now support active concurrent input from multiple GPS sources which can be a mixture of GPS devices and GPS log files.  GPS devices cannot be active in two concurrent processes due to operating system limitations.

A simple use of several active sources would be to attach and access three GPS devices, all of which are being concurrently displayed as cursors.  One of the active sources can be designated to control the view, its scrolling, and related changes.  An alternative might be to continually rescale the view window to maintain all cursors in the view regardless of their geo-separation.  A more complex application of multiple GPS sources would be a situation where several vehicles send in their positional information by cellular phone, the vehicle containing TNTview is itself moving and producing a GPS input, and the driver is attempting to follow a route across the field that was created in advance or logged by a previous vehicle (robots anyone?).

Tracking Symbols.

Any TrueType character or point symbols provided by or created in TNTmips can be displayed to show a GPS position.  Selecting different symbols for each GPS input source can be used to keep track of multiple inputs.  A separate symbol can be assigned to represent a GPS input that is moving and another to distinguish the same input when it is stationary.  For example, you may want a truck-shaped symbol to be red when the point is stationary (in other words, the truck is parked) and green when it is moving.

MicroImages has already found that most questions like “Where is my GPS symbol?  I know it’s working!” result from a position output that is not actually in the current view.  This might be due to zooming, panning, a wrong georeference, a wrong projection, and many other causes that can even put you halfway around the world.  A unique “where are you” feature has been provided to identify this situation.  If your GPS is on, selected, and outputting coordinates correctly, but these coordinates are not in your current view, an arrow gadget will appear at the edge of the view, pointing to the position of the GPS coordinates.  This gadget shows no matter how far you are off the edge of the current view.  If the GPS position is moving, the arrow gadget will move around the edge of the view; if the position is not changing, it remains stationary.  The color of this gadget matches the current color of the GPS symbol, so its color will also change to indicate whether the “off-screen” position is moving or stationary.  If your GPS is moving and the mouse cursor is not, the GPS coordinates of this remote position will show in the lower right corner of the view.  The type of coordinates showing (mouse, GPS, ...) is indicated by the icon just to the left of these coordinates.

Setup and Access.

A GPS device must be set up and configured before it can be used.  A dialog allowing a new GPS device (real or logged) to be configured is available from anywhere a GPS source can be selected.  There is no limit to the number of GPS devices that may be set up on a single system.  As noted, an option on the GPS menu is also available to set up a device.

A GPS menu item and icon appear on all views that support GPS position setup and performance.  This menu currently provides a dialog box with options to:

  • select which GPS source(s) to display positions for

  • set up a new GPS device (in other words, activate it)

  • open a GPS log file

  • toggle auto-scrolling on/off

  • select the units for reporting GPS locations, speed, and so on

Use this dialog box to set up and configure each new GPS device selected for use in a view, accessed in SML, used in graphical editing, and other locations.  Multiple GPS devices (mixed real and virtual) can be selected.  Cursors will be shown for each device.  You can designate which GPS input device controls the scrolling.

Automatic GPS Connection.

Starting a TNT process like Display, which accepts a GPS input, will make a single attempt to connect to the default GPS devices, if any.  If the connection is successful, the GPS location will be automatically displayed in all 2D group and layout views if the reported position is within the extents of the object(s) viewed.  You will not need to press a “GPS” button to turn on GPS position reporting unless multiple devices and/or log playback is being used.

GPS Status and Control Dialog.

A status and control dialog can be exposed for each active GPS source.  Status information displayed for each input is position, speed, heading, accuracy, number of satellites, and so on.  Not all this information may be available for specific equipment or log files, as they may not contain sufficient information to compute it.

If the source is a GPS log file, this dialog will provide the ability to rewind the log, close the log, and so on.  It also allows the selection of the symbols to represent your GPS position in the view.  Separate symbols and their orientation characteristics can be chosen to represent stationary GPS coordinates and moving GPS coordinates.

As mentioned earlier, if the source is a real GPS unit, it acts as a dumb device, as little control of it is supplied by the NMEA protocol (in other words, TNT products cannot send control information to the device).  Thus, this dialog can provide only limited control options such as the ability to close it as an active source or send its output to a log file.

Buying a GPS.

Perhaps you are planning to purchase an inexpensive GPS unit ($200 or less) for experimenting with the TNT products.  Please make sure it has the following features.  It should connect its NMEA output to your computer via a serial cable.  It is very important that it provide at least simple programming whereby you can manually enter any coordinates, which are then sent out via the serial cable.  This will create test positions when you are indoors where no satellites can be seen.  It will also enable you to demonstrate or check how a procedure works on a view of some distant location when you are not occupying a real position within that view.  If you have trouble locally acquiring a suitable unit for $200, contact MicroImages and one can be exported to you.

Object Selection Dialog.

* Selecting Objects.

The Object Selection dialog now presents a simpler appearance by using icon buttons instead of text buttons for specifying optional actions on the object(s) selected.  The following actions are still controlled by text buttons:  “OK”, “Skip”, “Cancel”, and “Help”.

An “Info” icon button has been added to display the same information about an object as in the Project File Maintenance dialog.  This should prove to be very helpful in identifying your objects and checking their characteristics before using them.

A “Refresh” icon button has been added to force a re-read of the list of files.  This is useful if you have changed the media, such as inserting a different CD-ROM.

* Selecting All Objects.

There is now a very useful “Add All” icon button available in the multi-object selection mode.  This powerful “Add All” option has different behavior depending on whether a list of Project Files or objects is being shown.  Within a single Project File, “Add All” will add ALL of the usable objects in that file to the list of selected objects.  An example use of this would be to select all the 200+ spectral bands in a hyperspectral image Project File where each individual spectral band is, or appears to be, a separate object.

Outside of a Project File, the “Add All” option will add ALL of the usable objects in ALL of the files in the current directory to the list of selected objects.  This could be a very large number of objects for use in mosaic or other processes.  For example, put all the Project Files containing orthophotos of a county into a single directory.  Then use Add All to select them all for immediate tiling into the display.

When you are in the multi-object selection mode, there is also a “Remove All” icon button.  The “Remove All” button will clear the list of all selected objects.  Sometimes when you have selected hundreds of objects, it will be easier to start over.

Future.  The next scheduled changes for object selection will be to improve how you navigate from the drive and directory level to the object level in the Project File (maybe for V6.10).

* Hatch Pattern Editor (new prototype feature).

The ability to create and use hatch, or stroke, fill patterns has been added to the TNT products.  A hatch pattern consists of a number of elementary components that have separate angle, spacing, offset, and thickness.  All these components together can constitute a very complex hatch pattern.  A color plate is attached entitled Hatch Patterns for Polygon Filling to illustrate the results of each of these procedures.

A single hatch pattern can consist of multiple components which can be hidden, raised, lowered, and so on.  There are two possible elementary components in a hatch pattern:  simple and line pattern.  A “simple” component is just a solid line that, combined with hatching parameters, is capable of producing simple polygon hatching.  You can use the new hatch pattern editor to set up its angle, spacing, offsets, and thickness.  The editor will also allow you to build up a hatch pattern of multiple overlays of different kinds of simple lines.  These patterns can be saved, used, and edited at a later date.

A line pattern component in the hatch pattern uses a standard TNTmips line pattern.  These are the same line patterns you already use to represent roads and other linear features in TNTmips.  Using the hatch pattern editor, they can be selected and inserted as components in the hatch fill pattern.  Line components can be complex in themselves and can be used to produce very complicated hatch designs.

Hatch pattern geometry can be referenced to a common coordinate system.  For example, a series of different color diagonal patterns of uniform spacing can be designed.  When these are applied in a map to fill various different polygons, the patterns will align end-to-end when they meet at common boundaries.
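A short sketch shows why referencing hatch geometry to map coordinates makes fills align across shared polygon boundaries: the position of each hatch line is computed from the common coordinate system, not from any polygon-local origin.  The function and its parameters are illustrative, not TNT internals.

```python
# Sketch: hatch-line placement driven by map coordinates alone.
import math

def hatch_phase(x, y, angle_deg, spacing):
    """Distance from map point (x, y) to the nearest hatch line of the
    shared family defined by angle_deg and spacing."""
    theta = math.radians(angle_deg)
    # Signed distance of the point along the hatch-normal direction.
    d = -x * math.sin(theta) + y * math.cos(theta)
    return d % spacing

# A point on the shared boundary of two adjacent polygons: each polygon
# computes the same phase for it, because nothing polygon-specific
# enters the calculation, so the diagonal fills meet end-to-end.
phase_in_polygon_a = hatch_phase(105.0, 42.0, 45.0, 10.0)
phase_in_polygon_b = hatch_phase(105.0, 42.0, 45.0, 10.0)
```

If the phase were instead measured from each polygon's own bounding box, the two fills would generally disagree at the boundary, which is the misalignment this feature avoids.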

Assign variable color to hatch pattern elements when you are creating a pattern that will have multiple different uses.  This variable color component can then be set by the attributes of the polygon that it will fill.  For example, a property of a soil polygon could control the color of lines in the hatch pattern used to fill it.

Styles.

After many requests, point, line, bitmap-fill, and hatch fill styles can be copied between subobjects.  There is a “copy” button in the “Style Editor” dialog that provides this functionality.

Contrast.

Multiple rasters can now be selected for a contrast transformation.

Import/Export.

Introduction.

Finishing Projects.  MicroImages, via the TNT products, strives to provide you with more and better import/export capabilities than any competing image processing and GIS software vendor.  Competing products provide adequate functionality in their specialty area but do not cover all geodata types, since their more specialized products cannot use them.  Where these competing products have proprietary internal formats (for example, ESRI and MapInfo), these have also been conquered and added for import, export, and direct use.

Many of you are now using the TNT products in concert with other widely used writing, publishing, and database products (EXCEL, Illustrator, Reader, Oracle, ...).  Recently, exports to other widely used commercial publishing products (ADOBE Illustrator and Reader) have been added to the TNT products to assist you in completing your final, polished projects.  These kinds of exports are not generally available in any other competing broad based commercial image processing and GIS products.  A new Getting Started tutorial booklet entitled Sharing Geodata with other Popular Products is provided to assist you in the symbiotic use of your geospatial analysis with common desktop products.

Lots of Formats.  There are thousands of geodata product formats in use around the world.  It seems that everyone in each nation who creates a geodata set starts out by creating a “new, convenient, and preferred format”.  In addition, geospatial analysis is still evolving, so existing formats are continually changing.

For example, there is a long list of U.S. Department of Defense (also adopted by Australia, Canada, Great Britain, ...), NIMA, and NATO based standard formats for geodata, of which only a few have been added to the TNT products (for example, ADRG and DTED).  Each format is documented with at least several hundred pages!  Some of these, such as NITF 2.1 (a NIMA format), are complicated by the large amount of military oriented metadata they contain.  There is no problem in obtaining the specifications for these military formats.  The problem in adding many of them is that sample datasets are not available for testing:  even though this geodata is not classified, its use is restricted by governmental and/or military bureaucracy.

New import/export formats are being requested from MicroImages every month.  If the formats are judged to be of world-wide general interest, they will be addressed as time allows.  If they are local to a nation or of restricted use, a quote will be provided for their addition to the TNT products as an overtime job.

Create Your Own.  The powerful import and export functionality in the TNT products is currently being modified to make it modular and function oriented.  This will provide an improved environment in which you can create your own import and export functions to handle local situations.  SML in V6.10 will provide functions that you can use to import or export the formats already supported or create your own.  With these you can write scripts to modify, transform, set up, and/or import large collections of these standard formats.  Sample scripts will be provided to illustrate this approach.  Furthermore, you can write new SML scripts and APPLIDATs that use, analyze, reformat, interpret, ... the geodata of others (for example, Shapefiles, coverage, TAB, ...).

Those who have complex local formats and are familiar with the use of TNTsdk will be able to more easily create their own C++ based functions.  These can be added to the menu or to the SML function list.  To assist you in using TNTsdk for this purpose, the source code will be provided for several TNT import and export functions for formats that are already publicly documented.

Autonaming Export.

Exporting to multiple files now provides an option to “auto-name” the output files.

* PDF Export (new prototype feature).

Map Layouts can now be exported to ADOBE PDF format to create electronic versions of your maps for distribution on CD or via the Internet.  This feature is provided in the print process as it operates similarly to creating a TIFF, EPS, or print file.  As most of you have already experienced, ADOBE’s Acrobat Reader and PDF format are becoming the ubiquitous means to distribute color documents of all kinds.  However, you may not be aware that it is also a means to distribute high quality electronic maps at any scale.  The contents or components of these maps are also protected to some extent, as it is not possible to extract individual components (rasters, vectors, CAD layers, ...) from maps distributed in this PDF format.

U.S. federal agencies are distributing their electronic maps in PDF format such as at http://www.geohazards.cr.usgs.gov/eq/ and http://www.epa.gov/surf2/maplibrary.  MicroImages has also posted sample PDF maps and posters exported from TNT layouts at http://www.microimages.com/promo/sample.htm.  In preparing a map or poster for distribution via PDF format, please remember one fundamental guideline:  if you expect your map user to view or print out a large detailed map with their Reader, it will be a large file even if compressed.  You cannot prepare a layout in a small 8.5 by 11" format, convert to PDF, and then expect your clients to view or print a large, detailed version of this product.  If you want them to view or print large detailed maps, then your layouts must be prepared and laid out with detailed information (for example, large high resolution rasters) and this goal in mind.

Modifications to GGR.

The GGR (MicroImages’ public domain Generic Georeference Raster) format now supports line- and pixel-interleaved data.  These are convenient formats for use if you are designing or assembling a hyperspectral imaging device, as they are public, documented, directly used by the TNT products, and convenient for the storage of hyperspectral imagery.  You might also use these formats as intermediates if you are trying to get some strange hyperspectral imagery into a usable format for the TNT products.

* MapInfo TAB Import (new prototype feature).

The MapInfo internal format commonly referred to as TAB files can now be imported.  This import includes the graphical, attribute, raster, and georeference information in these files.  This allows TNT products users to directly access MapInfo geodata; conversion to the MIF/MID transfer format is no longer required.  An export to this same format is underway and, when completed, will also allow native MapInfo geodata to be directly edited.

ESRI ASCII Raster Import.

Georeferenced rasters in this ESRI ASCII format can be imported.

E00 Export Modifications.

Text labels are now exported as annotations into the Arc/Info E00 format.

ArcBIL/BIP Export.

ESRI’s ArcBIL/BIP raster format can be exported.

JPEG Import Modifications.

Rasters imported from JPEG can now also import and use the georeference information stored in the GeoTIFF tags.

TIFF Import Modifications.

Rasters imported from TIFF can now also import and use the georeference information stored in the auxiliary georeference files of the ArcInfo World and MapInfo TAB formats.
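The ArcInfo World file mentioned above is a small public-format text file of six numbers defining an affine transform from cell indices to map coordinates.  The sketch below applies one; the sample values are fabricated for illustration.

```python
# Sketch of applying the six-parameter affine georeference stored in an
# ArcInfo "world" file (e.g. a .tfw alongside a .tif).  The six lines
# are, in order: x cell size A, rotation terms D and B, y cell size E
# (negative for north-up images), then C and F, the map coordinates of
# the center of the upper-left cell.  Sample values are invented.
world_file_text = """30.0
0.0
0.0
-30.0
551415.0
4521885.0"""

A, D, B, E, C, F = (float(line) for line in world_file_text.splitlines())

def cell_to_map(col, row):
    """Map coordinates of the center of raster cell (col, row)."""
    return A * col + B * row + C, D * col + E * row + F

x, y = cell_to_map(10, 5)   # 10 cells east, 5 cells south of the corner
```

Reading this sidecar file during TIFF import is what lets an otherwise plain raster arrive already georeferenced.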

GeoTIFF Export Modifications.

Georeference subobjects associated with raster objects in RVC Project Files can now be exported to GeoTIFF tags to create georeferenced TIFF images.

LANDSAT TM-Fast Import Modifications.

Unfortunately, the TM-Fast format does not identify non-imaged cells in the raster in any way and allows them to be “0” even though “0” is also a valid data value.  Large areas of boundary cells of “0” value that do not represent ground cells can distort the results of various analysis processes (for example, unsupervised image classification).  To overcome this, importing from the TM-Fast format has been modified so that the large triangular corner areas outside the actual LANDSAT parallelogram image area, recorded as “0” values, can optionally be set to any null value (use “255”), while “0” data values within the image remain “0”.

If this new option is selected, these null cells are detected as a long string of “0” cells present in all spectral bands being imported and occurring only at the beginning and ending of each scan line.  This condition can occur only outside the image area in these triangular corner areas.  However, real “0” value image cells along the edges of the images may occasionally be imported as null cells.  Performing this test during the import of a single spectral band does not slow the import noticeably.  However, performing this test across several bands being imported at one time significantly increases the time needed for the import.
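The detection rule above can be sketched in a few lines: leading and trailing runs of zeros on each scan line, present in every band, are treated as non-imaged corner cells and set to the null value, while interior zeros survive.  This is an illustrative reimplementation of the described test, not TNT code.

```python
# Sketch of the optional TM-Fast null handling: edge runs of 0 shared by
# ALL bands become the null value (255); zeros inside the imaged
# parallelogram are left alone.
NULL = 255

def mark_nulls(bands):
    """bands: list of rasters, each a list of scan lines (lists of ints)."""
    n_lines = len(bands[0])
    n_cells = len(bands[0][0])
    for line in range(n_lines):
        # Length of the zero run shared by all bands at each end.
        lead = 0
        while lead < n_cells and all(b[line][lead] == 0 for b in bands):
            lead += 1
        trail = 0
        while (trail < n_cells - lead
               and all(b[line][n_cells - 1 - trail] == 0 for b in bands)):
            trail += 1
        for b in bands:
            for i in range(lead):
                b[line][i] = NULL
            for i in range(n_cells - trail, n_cells):
                b[line][i] = NULL
    return bands

band1 = [[0, 0, 7, 0, 9, 0]]
band2 = [[0, 0, 3, 0, 4, 0]]
mark_nulls([band1, band2])
# band1[0] is now [255, 255, 7, 0, 9, 255]: edge zeros become null,
# the interior zero survives.
```

Checking every band at each end of every scan line is also why running the test across several bands at once costs noticeably more time than testing a single band.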

Text to Vector Import.

Text files containing pairs of points can now be imported to create line segments as in a vector object.  This procedure was recently used to import topographical break lines (for example, drainages, stream embankments) from an analytical stereo plotter.

ASCII Import Modifications.

Save Format.  You can now save a format you design for importing text files.  Suppose you have several text files containing X and Y field numbers, delimiter, projection, orientation, and additional attributes that you need to import as separate vectors.  All of the files are exactly alike in format, but you get them at various times.  You can now define the format information for the first text file you import and save the format settings to reuse the next time you need to import a text file with exactly the same format.  A good example of this type of data is information collected by a mechanical device that is always the same, such as a yield monitor from a combine or a gravimetric sensor.

Make 3D Lines.  ASCII text files containing ordered XYZ vertices can now be imported as 3D line segments, with the vertices connected in the order the points were collected.  GPS-controlled devices, analytical stereo plotters, and other measurement instruments directly record ASCII strings of coordinates that can now be converted into 3D vectors for use in geophysical line leveling, to form breaklines for surface fitting, and so on.

For example, you might start with the ASCII output of airborne gravimetric sensors and soil conductivity probes.  During import, use this ASCII import feature to connect the sample points as vertices on a 3D line.  This method of data collection usually produces lines that are dense in the direction of collection but widely spaced between swaths or data collection paths.  If cross calibration lines are also collected, also import them in this fashion and use the line leveling feature in surface modeling to adjust (calibrate) the net of 3D lines before computing a surface from them or using the 3D vectors directly in some other process such as plotting into a perspective view.

When using analytic or software stereoplotters, you may collect special XYZ points defining lines which make up boundaries (for example, coastlines) or elevation breaklines (for example, drainage).  Import these from their original ASCII form as 3D lines for use as hulls and breaklines in the surface fitting process.
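The conversion described above amounts to reading one vertex per text line and joining consecutive vertices in collection order.  The whitespace-delimited layout and sample coordinates below are assumed for illustration.

```python
# Sketch of turning an ordered ASCII XYZ stream into a 3D line: each
# text line holds one vertex; consecutive vertices form the segments.
ascii_xyz = """100.0 200.0 351.2
101.5 200.3 352.0
103.1 200.5 353.5"""

vertices = [tuple(float(v) for v in line.split())
            for line in ascii_xyz.splitlines()]

# A 3D "line" here is just the ordered vertex list; its segments are
# the consecutive vertex pairs.
segments = list(zip(vertices, vertices[1:]))
```

A swath of such lines, plus cross calibration lines imported the same way, is exactly the net of 3D lines that the line leveling feature in surface modeling can then adjust.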

Database to Vector Import.

Database records can be imported to create 3D points in a vector object from an RVC database object, dBASE III and IV, PCInfo, and R:Base.  Records are not imported if the fields designated as X and Y both contain zero.  Use this procedure to import points that define a surface and were collected by some other sensor (airborne laser ranging) or measurement device (analytical stereo plotter) directly into a database format.

You may wish to use a query to filter, screen, or extract selected points for use in your vector object.  Remember that you can apply a query when importing into an RVC database object from tables in a supported format or via an ODBC connection to Access, Oracle, and others.  Then use this internal RVC table to further filter the records and subsequently create the points in your 3D vector object.
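The screening described above can be sketched as a two-stage filter: the built-in rule drops records whose X and Y fields are both zero, and an optional query (here a simple predicate standing in for an SML-style query) filters the rest before points are created.  Field names and values are illustrative.

```python
# Sketch of database-to-vector point import with the zero-coordinate
# rule and an optional query filter.
records = [
    {"X": 551715.0, "Y": 4521735.0, "Z": 350.0},
    {"X": 0.0,      "Y": 0.0,       "Z": 0.0},   # dropped: X and Y both zero
    {"X": 551745.0, "Y": 4521705.0, "Z": 362.5},
]

def import_points(records, query=lambda record: True):
    points = []
    for record in records:
        if record["X"] == 0.0 and record["Y"] == 0.0:
            continue                  # both designated coordinate fields zero
        if query(record):
            points.append((record["X"], record["Y"], record["Z"]))
    return points

points = import_points(records, query=lambda r: r["Z"] > 300.0)
```

The zero-zero rule matters because unfilled coordinate fields in many database formats default to zero, which would otherwise pile spurious points at the coordinate origin.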

IDRISI Import.

IDRISI rasters can be imported (the header record is extension *.DOC and the raster is *.IMG).  Vectors and databases are not yet imported, as they are not as commonly used in IDRISI and have not been requested.

National Transportation Atlas Import.

The U.S. Department of Transportation (DOT) distributes without charge a National Transportation Atlas (NTA) on CD for 1997.  The NTA is a collection of geospatial data developed by the DOT and other NAFTA nations’ government agencies depicting transportation facilities, networks, and services of significance for the NAFTA nations.  This geodata is designed to be used with geospatial systems to locate transportation features and to provide a framework for transportation network analysis.  The CD of the NTA geodata base can be obtained at http://www.bts.gov/gis/ntatlas/ntad.html.

The geodata in the NTA are generally of a geographical accuracy consistent with that of a 1:100,000 scale map and are provided at various scales.  The NTA contains line data for the highway, railway, waterway, transit, and commercial air networks.  Point geodata cover airports, runways, rail stations, port facilities, and similar features.  Polygon features cover place names (over 2500 population), state and county boundaries, urban areas, congressional districts, economic regions, National Parks, military bases, and so on.  All of these elements have associated attributes.

You can now import the geodata in the NTA.  Continuing the archaic historical example of Arc/Info, the NTA stores each individual type of geodata as many files with a specific name, in a separate directory, with standard extensions.  For example, the U.S. railroad network at each scale can be imported from directories containing the RAIL... files with the extensions *.DBF, *.GEO, *.LNK, *.MET, *.NOD, *.TL1, *.TL2, and *.TL3.  During import, the TNT products assemble these various components into appropriate Project Files.

ENVI Import Modifications.

Hyperspectral imagery stored in ENVI format can now be imported.  This includes imagery acquired and distributed by the HYMAP (Australia) and DAIS (Europe) imagers.

SDTS DEM Import.

USGS DEMs in SDTS format can be imported.  This is the format in which these elevation models are now stored on the Internet.

SDTS Attribute Export.

The attributes in a vector object can be exported to SDTS.

Planned New Import/Export Formats.

MapInfo TAB Export.

A vector object and its attributes will be exportable to the MapInfo native format (alias TAB format) for immediate and direct use.  MicroImages is not using the file access libraries provided to MapInfo partners, but directly creates this format within the TNT code.

AISA Import.

Import of hyperspectral imagery from the Finnish Airborne Imaging Spectrometer (AISA) is underway.  (See the Hyperspectral section for a description of this device.)

Spatial Data Framework (SDF) Import —Japan.

The preparation of the import for the Digital Map 2,500 (Spatial Data Framework) is underway.  This geodata is at a scale of 1/2,500 and covers Tokyo and environs (18 CDs), Osaka (12 CDs), Nagoya, and is expanding to other cities.  This SDF data structure contains polygons, arcs, points, nodes, attributes, connectivity, and buildings in raster form.

AIRSAR.

The Jet Propulsion Laboratory holds and distributes considerable aircraft and spacecraft SAR imagery in a compressed Stokes Matrix format.  MicroImages will be providing an import for this imagery in which each cell is decoded and stored in a raster object as a double precision floating point number.

Database Constraints (new, partially available feature).

The Concept.

MicroImages has expended considerable effort to add constraints on data fields for use in operations in which you enter data.  Constraints assigned when the record structure is defined can force each field to be multiple choice, accept only numbers, accept variants of day/month/year, be confined to a range of numbers, be yes/no only, be checked against a table of string entries, and so on.  These procedures enable you, during the setup of a new table, to define the entries that will be accepted for each field.  Later, when these tables are used, the constraints are imposed whenever data is entered.  For example, a field can be defined as multiple choice, such as state names only, and this list of choices can be attached to that field when it is defined.  Subsequently, when this field appears in a single record or tabular view, it will present only these choices in a drop-down menu when you attempt to complete or edit that field.

Your field constraints are actually stored in other relational database hidden tables linked to the fields in each of your tables.  For example, if your field is a multiple choice list such as county names, these strings are kept in a string field in another linked constraints table.  As a result, you could present a lot of complex reference information when your field is being completed.  For example, a list of registered voters or customers could be linked and used as a constraint in completing a field.
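A minimal sketch of the idea: each constrained field carries a rule (a choice list, a numeric range, ...) stored apart from the data table itself, and every entry is validated against that rule.  The field names, rule keys, and sample values are illustrative only, not the TNT constraint tables.

```python
# Sketch of field constraints: rules live beside the table, and entries
# are checked against them at data-entry time.
constraints = {
    "state": {"choices": ["Nebraska", "Kansas", "Iowa"]},   # multiple choice
    "yield": {"min": 0.0, "max": 300.0},                    # numeric range
}

def validate(field, value):
    """True if `value` is acceptable for `field` under its constraint."""
    rule = constraints.get(field)
    if rule is None:
        return True                      # unconstrained field: anything goes
    if "choices" in rule:
        return value in rule["choices"]
    return rule["min"] <= value <= rule["max"]

ok = validate("state", "Nebraska")       # in the choice list
bad = validate("yield", 450.0)           # outside the valid range
```

Because the choice lists live in their own linked tables rather than in the data table, a long reference list (the registered voters or customers mentioned below) can back a field without bloating every record.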

First Use.

The data structures that define constraints, contain the choices and reference fields, and apply them are in V6.00.  Their use can be seen immediately in the 11 data entry dialog boxes in the new Data Logger APPLIDAT.  The 15 to 20 fields set up for use in each of these tables were originally defined in TNTmips to be multiple choice, yes/no, date types, and so on.  The Data Logger simply uses these predefined tables with their associated field constraint definitions.  When this SML script requests a specific data entry dialog, the constraints associated with that table control the fields the dialog will show and the acceptable entries for each field.

Status.

Unfortunately, the user interface modifications required to use these new changes throughout the TNT products were not completed for V6.00.  As a result, these features are not ready for general use, and efforts to perfect them are continuing.  However, constraints for fields can be set up or edited now using a “Constraints” button on the “Edit Table Definition” dialog.

Constraints will allow you to specify such things as:

  • multiple choice with an associated list of choices such as yes/no, numbers, state names, and so on

  • mandatory or optional field entry when that record is completed

  • default value or choice for that field

  • valid range of values for numeric fields

  • restrictions on case (all upper case, all lower case, capitalize first letter, ...)

  • if a field is a key field, you can specify that the value entered must exist in the primary key table

Controlling Links.

Constraints can also be used to control the special fields used to link tables.  For key fields, you can specify how the field should be presented in a single record view, such as an option menu listing only the valid values from the primary key table or a text field with a button to the right, which will pop up a scrolled list of valid values derived from the primary key table.  When you make entries into these special fields, these are the options you can specify for any attempt to enter an invalid value:

  • pop up an error dialog

  • substitute the closest match

  • create a new record in the primary key table and add a pop-up dialog to fill in the other fields in that record

  • create a new record in the primary key table and just fill in the default values in the other fields

* Object Editor.

Profile Editor.

A new Profile Edit window has been added.  Select a line in a 3D vector object (for example, drainage or a geophysical survey line), and it can be shown and edited in profile.  In this edit window, the Z values of vertices in the line can be changed, and the line can be splined.

Drawing Tools.

The Line/Polygon graphical editor now has a button on the panel to optionally turn off the start (square) and end (circle) markers of the line.  Use this option if these graphical devices obscure some position.  The markers default to on.

Two enhanced drawing methods have been added when the line editor is in the “Stretch” drawing mode.  Press the “Shift” key while drawing and the line will be generated only at right angles to the last segment of the line.  Press the “Ctrl” key while drawing and the line will be generated only in the horizontal and vertical directions.
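Both constrained modes reduce to the same operation: snap the cursor direction to the nearest multiple of 90 degrees, measured either from the axes (“Ctrl”) or from the previous segment's direction (“Shift”).  This is an illustrative sketch of that snapping, not editor code.

```python
# Sketch of angle snapping for the two constrained drawing modes.
def snap(cursor_angle_deg, step=90.0, base=0.0):
    """Snap a cursor direction to the nearest multiple of `step`,
    measured from the reference direction `base` (in degrees)."""
    relative = cursor_angle_deg - base
    snapped = round(relative / step) * step
    return (base + snapped) % 360.0

# "Ctrl": horizontal/vertical only, i.e. multiples of 90 from the axes.
horizontal = snap(37.0)                  # 37 degrees snaps to 0 (horizontal)
# "Shift": right angles measured from the last segment's direction (30).
right_angle = snap(100.0, base=30.0)     # snaps to 30 + 90 = 120
```

The only difference between the two keys is the choice of `base`: zero for the axes, or the heading of the last segment drawn.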

Vector Filtering.

A filter that removes islands based on the area of the island has been added to the filter list.  There is also an option to remove all islands.

TIN Nodes.

Operations that add nodes to a TIN can now optionally get the Z values for the nodes from a reference surface layer.

* Label Positions  (new prototype feature).

V5.90 introduced a special graphical tool to allow you to quickly and interactively assign Z values to contours in a vector object.  A new tool that works in a similar interactive fashion is now available to help you insert labels into contour and other maps.  A color plate entitled Setting Line Labels Interactively is attached to illustrate this procedure.

Which Lines?  First a query is written to determine which lines should be selected for labeling.  The query is also used to specify how the Z value should be formed (integer only), where it will occur relative to the line (on or offset), the font type, the font size, and so on.  For example, insert the Z field as an integer for every 5th contour interval at only even 100 increment contour values.  Insert this value centered in the line and oriented along it using a 14 point, Helvetica font in blue, and so on.  Your query can be tested until it produces the desired labels using the insertion tool described below.  This query can also be created by Ptolemy’s wizard windows.
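The selection rule in the example above (every 5th contour of a 20-unit interval, but only at even 100-value contours) can be expressed as a simple predicate.  This Python sketch shows only the logic; the actual query is written in the TNT query language, and the interval value here is an assumption for illustration:

```python
def select_for_label(z, interval=20):
    """Return True if a contour with value z should receive a label (sketch).

    Every 5th contour of a 20-unit interval falls on a multiple of 100;
    of those, keep only the even hundreds (200, 400, ...), per the example.
    """
    return z % (5 * interval) == 0 and (z // 100) % 2 == 0

# 200 is an even hundred; 100 is an odd hundred; 110 is not a 5th contour
print([select_for_label(z) for z in (200, 100, 110)])  # -> [True, False, False]
```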

Where in the Lines?  Once a query is available, a line can be drawn in the display across those vector lines into which the attribute, such as a Z value or road name (or a computed field), is to be inserted as a label.  Draw the line using your judgment (your wizardry) so as to cross the line elements at positions suitable for attractive label placement.  For example, contour line values are often inserted in an area where the contours are widely separated at slopes oriented generally toward or away from the viewer of the display or printed map.  Roads are often labeled in vertical or horizontal segments in areas of little other map clutter.  The query is immediately evaluated for all line element intersections and the labels immediately shown.  If unsuitable in appearance or position, simply change the position of the insertion line.  Otherwise, save this string of labels and move on across the map.

Wizards.  Once this tool is set up to work the way you want it, a complicated map can be labeled with attractive labels in a short period.  However, it was found that creating a suitable query was complicated.  Also, several slightly different queries are needed to make a typical map using several types of fonts, styles, and sizes.  To assist you in this, Ptolemy wizard windows can be used in setting up queries for commonly used labeling schemes for contour maps, road maps, and so on.  When you have finished answering Ptolemy’s questions, a query will be automatically generated for your immediate use.  If these standard queries do not quite do what you want, save Ptolemy’s query, and modify it to suit your more sophisticated requirements.

By now, you already know that queries used throughout geospatial analysis can get complicated.  It is therefore obvious to you and MicroImages that this kind of Ptolemy wizardry should be introduced into other suitable areas of the TNT products.  For example, commonly constructed theme maps could be set up in this fashion, standard element selection designed for standard vector geodata such as TIGER, and so on.  Certainly, other TNT procedures need and will gradually get wizards.

Modifications since V6.00 CDs.

Opening Lines for Labels.  A key feature which did not make it onto the V6.00 CDs is the option to break open windows in lines for labels when they are rendered.  This is not trivial, as many lines in just one layer can cross through the outline box inscribing each label.  A suitable scheme has been designed, but its incorporation at the end of the V6.00 development cycle was judged as too risky.  It should be available for your testing by the time you begin using the new V6.00 label insertion scheme.

COGO.

The COGO process has been significantly redesigned to support alphanumeric point identifiers.  The point editor design has also been improved.  COGO can now import your point data from ASCII text files.

Vector Filtering.

A “Filter Report” button has been added to the Filter Control window to report the statistics of the tested filter.  Use this information to determine how a filter setup is working before applying and saving its results.

A “Remove Islands” filter has been added to the list of filters.  This filter allows removal of islands based on the area of the island.  There is a separate toggle to remove all islands.

* Polygon Fitting.

Two new polygon fitting or “home range” techniques have been added:  the Fixed Kernel Method and Adaptive Kernel Method.  The Adaptive Kernel method produced very good results in test applications.  More information on this method can be found in:

B.J. Worton.  1989.  Kernel Methods for Estimating the Utilization Distribution in Home-Range Studies.  Ecology 70(1):164-168.  Abridged Abstract:  Kernel methods for the nonparametric estimation of the utilization distribution from a random sample of locational observations made on an animal in its home range are described.  They are of flexible form and can be used where simple parametric models are found to be inappropriate or difficult to specify.  Various choices for the smoothing parameter used in kernel methods are discussed.  Kernel methods give alternative approaches to the Anderson (1982) Fourier transform method.
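The core of the fixed kernel method is a kernel density estimate over the observed animal locations.  The following Python sketch uses a bivariate Gaussian kernel with a single smoothing parameter h (the adaptive method would instead vary h per point); it illustrates the idea after Worton (1989) and is not the TNT implementation:

```python
import math

def fixed_kernel_density(x, y, points, h):
    """Fixed-kernel utilization density at (x, y) (illustrative sketch).

    points: observed locations [(px, py), ...]; h: smoothing parameter.
    Returns the average of Gaussian kernels centered on each observation.
    """
    total = 0.0
    for px, py in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        total += math.exp(-d2 / (2.0 * h * h))
    return total / (len(points) * 2.0 * math.pi * h * h)

# density evaluated at a single observation with h = 1 is 1 / (2 * pi)
print(round(fixed_kernel_density(0.0, 0.0, [(0.0, 0.0)], 1.0), 6))
```

A home-range contour is then the region enclosing a chosen fraction (for example 95%) of the total density.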

Network Analysis.

A preferences dialog box is now available to permit the selection of colors and symbols for the various network interface components.  It also provides the option for all networks to be drawn in the styles used elsewhere in the TNT products.  Each line in the network can now be assigned and referred to by a name (for example, a street name) in addition to a number.  Each node can now take on a complex name dependent upon the names of the lines connecting to it.  A color plate entitled New Features in Network Analysis is attached to illustrate the results of each of these procedures.

All the capabilities of the network analysis process were adjusted so that their functionality could be made available for use in SML scripts (93 functions).  A typical SML script already developed by one international client recomputes the traffic routes to be used if a specific bridge is destroyed.
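A bridge-outage rerouting script of the kind described reduces to shortest-path computation over the network with the affected lines excluded.  This Python sketch uses a standard Dijkstra search to show the idea; the real script would use the SML network functions listed later in these notes, and the road names and impedances here are hypothetical:

```python
import heapq

def shortest_route(edges, start, goal, closed=()):
    """Recompute a route when some lines (e.g. a destroyed bridge) are closed.

    edges: {name: (node_a, node_b, impedance)}; closed: edge names to skip.
    Returns (total impedance, node list) or None if no route exists.
    """
    graph = {}
    for name, (a, b, cost) in edges.items():
        if name in closed:
            continue  # the destroyed line is simply left out of the graph
        graph.setdefault(a, []).append((cost, b))
        graph.setdefault(b, []).append((cost, a))
    queue, seen = [(0, start, [start])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for cost, nxt in graph.get(node, ()):
            if nxt not in seen:
                heapq.heappush(queue, (dist + cost, nxt, path + [nxt]))
    return None

roads = {"bridge": ("A", "B", 1), "river_rd": ("A", "C", 2), "hill_rd": ("C", "B", 2)}
print(shortest_route(roads, "A", "B"))                     # -> (1, ['A', 'B'])
print(shortest_route(roads, "A", "B", closed={"bridge"}))  # -> (4, ['A', 'C', 'B'])
```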

CAD Merge.

The CAD Merge process has been reintroduced and upgraded to use the same database joining options as the Vector Merge process.  The process can be accessed from “Process/CAD/Merge...”

Automatic Classification.

You can now view a histogram of cells versus distance from class center for each classification process.  You can threshold out the tail of class pixels manually in the cell/distance histogram.
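The histogram and thresholding step can be sketched as follows.  This Python illustration is hypothetical (TNTmips performs the thresholding interactively in the histogram display); it shows binning cells by distance from the class center and rejecting the tail beyond a cutoff:

```python
from collections import Counter

def distance_histogram(distances, bin_width=1.0):
    """Histogram of cells versus distance from the class center (sketch)."""
    return Counter(int(d / bin_width) for d in distances)

def threshold_tail(distances, cutoff):
    """Keep only cells within the cutoff, as when thresholding the tail;
    rejected cells would be reassigned to a null class."""
    return [d for d in distances if d <= cutoff]

dists = [0.5, 1.2, 1.9, 3.4, 7.8]
print(threshold_tail(dists, 4.0))  # -> [0.5, 1.2, 1.9, 3.4]
```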

Raster Operations.

Linear combinations of rasters were previously limited to 24 raster objects.  Now that the TNT products work with hyperspectral images, this limit is too low.  As a result, the process has been modified to accept a much larger number of input rasters.
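The operation itself is a weighted cell-by-cell sum over any number of co-registered rasters.  A minimal Python sketch (pure lists standing in for raster objects; not the TNT implementation):

```python
def linear_combination(rasters, weights):
    """Weighted sum of N co-registered rasters (sketch, no fixed N cap).

    rasters: list of equally sized 2D lists of cell values;
    weights: one coefficient per input raster.
    """
    rows, cols = len(rasters[0]), len(rasters[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for band, w in zip(rasters, weights):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * band[r][c]
    return out

# average of two single-row rasters
print(linear_combination([[[1, 2]], [[3, 4]]], [0.5, 0.5]))  # -> [[2.0, 3.0]]
```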

Mosaic.

Mosaic now has two new options:  “Apply Contrast Tables” and “Apply Colormaps”.  These allow you to turn off contrast tables or colormaps that are being used for displaying the images, but not desired for mosaicking.

* SML.

Introduction.

Database Entry.  Complex dialog boxes can now be used in scripts to create and edit database records.  The layout of the dialog box, entry procedures, and data entry constraints can be designed and used in TNTmips.  However, once they are created, they can simply be selected for use in your script.  The use of this powerful feature is demonstrated in the Data Logger APPLIDAT.  Examine its script to see how simple it is to use each data collection dialog box to collect and edit database records in your scripts.

GPS.  The new capabilities of the TNT products to use GPS information have also been moved into SML, as demonstrated by the Data Logger APPLIDAT.  This required the addition of several new GPS functions.  The existing GPS functions have been extended to allow multiple GPS sources.  The GPS class now has a callback so that your script can be notified when the GPS moves.  New display and related functions provide the additional GPS input capabilities demonstrated by the sample Data Logger APPLIDAT.

Automatic Classification.  The multispectral image classification functions in TNTmips are now all included as functions in SML.  You could use them to construct an APPLIDAT that automatically analyzes and displays the classification of whatever multispectral image is combined with the APPLIDAT.  Another interesting application would be an APPLIDAT that first uses GPS positions to collect point training information in the field into database records (like the Data Logger).  As each new point is entered, a new, revised classification could be completed as you walk to the next training point.

Layouts.  Layouts can now be created or altered with SML scripts.  An immediate application of these functions is a script that queries you for alterations to an existing layout prepared in a TNT product.  For example, a layout you use over and over for the production of standard image maps could be adjusted by a simple set of questions produced by your script (for example, to change the name and location of the image, the map name, and so on).  A sample script to illustrate this application is being prepared.

Views.  You can now turn off any of the standard icons across the top of a 2D or 3D view.  This capability was requested by several script writers who wish to simplify the appearance of their views.

HelpTips.  All the text for your HelpTips can now be stored in a separate text subobject under the SML script.  The delay “to show” time and the color scheme can be specified in this simple text file for each HelpTip.  This makes changing the contents, tuning the time, and altering the appearance of any HelpTip as simple as editing this file.   As a result, HelpTips are now much easier to create, maintain, change, and translate.

Each HelpTip can now be assigned a separate color for its text, background, and border to alert the APPLIDAT user of special circumstances or that some sort of change has taken place.  The Data Logger APPLIDAT uses different color backgrounds for HelpTips that might appear to be the same as previously read, but are different in some subtle way.

Apple Macs.  Icons representing APPLIDATs now appear on the Mac desktop just as they previously did on Windows based systems.  Both of the current APPLIDATs (for biomass assessment and data logging) are now automatically installed on the Mac.  Selecting either SML icon launches everything needed to run the APPLIDAT.

New Functions.

The rapid expansion of this geospatial programming language (SML) continues with the addition of 184 new functions in V6.00, bringing the total number of functions to 779.  The largest single group of functions introduced is concerned with tracing and analyzing networks.

Display Functions. (29)

CADLayerGetObject
Set a CAD variable to point to the CAD object from a CADLayer.

CADLayerSetObject
Change the object used by a CAD layer.

DispSetMinMaxIndexFromGroup
Set Raster cells to layer index of Raster with largest/smallest cell value in a group.

GroupAttachHorizontal
Set horizontal position of display group in layout.

GroupAttachVertical
Set vertical position of display group in layout.

GroupQuickAddRegionVar
Quick add a region layer given a region variable.

GroupRead
Read a saved display group from a file.

GroupSetActiveLayer
Set the active layer for a group.

GroupWrite
Save a display group to a file.

LayoutCreate
Create a display or hardcopy layout.

LayoutDestroy
Destroy a layout and all the groups in it.

LayoutRead
Read a saved display layout from a file.

LayoutWrite
Save a display layout to a file.

PinmapLayerFindClosest
Return the record closest to a given point.

PinmapLayerSetObject
Change the object used by a pinmap layer.

RasterLayerGetObject
Set a raster variable to point to the raster object from a RasterLayer.

RasterLayerSetObject
Change the object used by a raster layer.

RegionLayerGetObject
Set a region variable to point to the region from a RegionLayer.

RegionLayerSetObject
Change the object used by a region layer.

TINLayerGetObject
Set a TIN variable to point to the TIN object from a TINLayer.

TINLayerSetObject
Change the object used by a TIN layer.

VectorLayerGetObject
Set a vector variable to point to the vector object from a VectorLayer.

VectorLayerSetObject
Change the object used by a vector layer.

View3DAddSimpleControls
Add simple viewpoint controls to 3D view.

ViewDrawPinmapElement
Draw a single pinmap element.

ViewGetTransMapToView
Get the transparm to translate between a map projection and view coordinates.

ViewSetGPS
Set the GPS source for a view.

ViewZoomToGroup
Zoom so that a given group fills the view.

ViewZoomToLayer
Zoom so that a given layer fills the view.

Drawing Functions. (3)

GetColorPixel
Return a pixel value given a color.

GetNamedColor
Return a COLOR given a color name from rgb.txt.

GetNamedColorPixel
Return a pixel value given a color name from rgb.txt.

Widget Functions. (2)

CreateOptionMenu
Create an XmOptionMenu widget.

CreateHTMLWidget
Create an HTML widget.

Conversion Functions. (5)

ConvertCMYKtoRGB
Convert Cyan-Magenta-Yellow-Black to Red-Green-Blue.

ConvertHBStoRGB
Convert Hue-Brightness-Saturation to Red-Green-Blue.

ConvertHSVtoRGB
Convert Hue-Saturation-Value to Red-Green-Blue.

ConvertRGBtoHBS
Convert Red-Green-Blue to Hue-Brightness-Saturation.

ConvertRGBtoHSV
Convert Red-Green-Blue to Hue-Saturation-Value.

String Functions. (4)

tolower$
Convert string to lower case.

toupper$
Convert string to upper case.

GetToken
Get a token from a string.

NumberTokens
Count number of tokens in a string.

Georeference Functions. (4)

CreateControlPointGeorefFromGeoref
Create a georeference subobject from control points using an existing georeference.

GeorefSetProjection
Set the projection of a Georef.

ReadControlPoints
Read the control points of the last used georeference attached to an object.

WriteControlPoints
Write control points to the last used georeference attached to an object.

File Functions. (4)

GetDirectory
Get a Directory.

CopyFile
Copy a File.

GetInputTextFile
Open a text file for input via dialog.

GetOutputTextFile
Open a text file for output via dialog.

Classify Functions. (20)

RasterClassifyAdaptiveResonance
Adaptive resonance (neural net) classification without mask raster.

RasterClassifyAdaptiveResonanceWithMask
Adaptive resonance (neural net) classification with mask raster.

RasterClassifyFuzzyCMean
Fuzzy C Means classification without mask raster.

RasterClassifyFuzzyCMeanWithMask
Fuzzy C Means classification with mask raster.

RasterClassifyISODATA
ISODATA classification without mask raster.

RasterClassifyISODATAWithMask
ISODATA classification with mask raster.

RasterClassifyKMeans
K Means classification without mask raster.

RasterClassifyKMeansWithMask
K Means classification with mask raster.

RasterClassifyMaxLikelihood
Maximum Likelihood classification without mask raster.

RasterClassifyMaxLikelihoodWithMask
Maximum Likelihood classification with mask raster.

RasterClassifyMinAngle
Minimum distribution angle classification without mask raster.

RasterClassifyMinAngleWithMask
Minimum distribution angle classification with mask raster.

RasterClassifyMinDistanceToMean
Minimum distance to mean classification without mask raster.

RasterClassifyMinDistanceToMeanWithMask
Minimum distance to mean classification with mask raster.

RasterClassifySelfOrganization
Self organization (neural net) classification without mask raster.

RasterClassifySelfOrganizationWithMask
Self organization (neural net) classification with mask raster.

RasterClassifyStepwiseLinear
Stepwise linear classification without mask raster.

RasterClassifyStepwiseLinearWithMask
Stepwise linear classification with mask raster.

RasterClassifySuitsMaxRelative
Suits’ maximum relative classification without mask raster.

RasterClassifySuitsMaxRelativeWithMask
Suits’ maximum relative classification with mask raster.

Network Functions. (93)

NetworkAllocatedCenterGet
Get allocated center node from position.

NetworkAllocatedCenterGetColor
Get allocated center color.

NetworkAllocatedCenterGetNumber
Get number of allocated centers.

NetworkAllocatedCenterGetPosition
Get allocated center position given a node.

NetworkAllocationGetReport
Get allocation report.

NetworkAllocatedLineGetNumber
Get number of allocated lines.

NetworkAllocatedLineGetPosition
Get allocated line position.

NetworkAllocationClose
Close an open allocation handle.

NetworkAllocationGetResultPositionList
Get allocation position list.

NetworkAllocationRecoverCenter
Get center handle from allocation handle.

NetworkAngleApply
Apply angles.

NetworkAngleGetImpedance
Get impedance for an angle.

NetworkAngleSetImpedance
Set impedance for an angle.

NetworkCenterAddCenter
Add a center at a node.

NetworkCenterCalculateAllocationIn
Calculate allocation in.

NetworkCenterCalculateAllocationOut
Calculate allocation out.

NetworkCenterCloneHandle
Duplicate a center handle.

NetworkCenterClose
Close an open center handle.

NetworkCenterDeleteAllCenters
Delete all centers.

NetworkCenterDeleteCenters
Delete specific centers.

NetworkCenterGet
Get center node given position.

NetworkCenterGetCapacity
Get center capacity.

NetworkCenterGetCentersList
Get list of centers.

NetworkCenterGetColor
Get center color.

NetworkCenterGetImpedanceDelay
Get center impedance delay.

NetworkCenterGetImpedanceLimit
Get center impedance limit.

NetworkCenterGetNumberCenters
Get number of centers.

NetworkCenterGetPosition
Get center position given node.

NetworkCenterSetCapacity
Set center capacity.

NetworkCenterSetColor
Set center color.

NetworkCenterSetImpedanceDelay
Set center impedance delay.

NetworkCenterSetImpedanceLimit
Set center impedance limit.

NetworkClose
Close a (main) network handle.

NetworkInitCenter
Create a center handle.

NetworkInitStop
Create a stop handle.

NetworkLineGetDemand
Get demand for a line.

NetworkLineGetDirectionState
Get line direction state.

NetworkLineGetImpedance
Get impedance for a line.

NetworkLineGetName
Get line name.

NetworkLineGetNodeFrom
Get the node a line is coming from.

NetworkLineGetNodeTo
Get the node a line is going to.

NetworkLineGetNumberLines
Get number of lines.

NetworkLineSetDemand
Set demand for a line.

NetworkLineSetDirectionState
Set line direction state.

NetworkLineSetImpedance
Set impedance for a line.

NetworkNodeGetBarrierState
Get barrier state for a node (Boolean).

NetworkNodeGetName
Get node name.

NetworkNodeGetNumberNodes
Get number of nodes in the network.

NetworkNodeSetBarrierState
Set barrier state for a node.

NetworkReadAttributeTable
Read an attribute table.

NetworkRouteClose
Close an open route handle.

NetworkRouteGetLine
Get a line from a position.

NetworkRouteGetLineDirection
Get line direction.

NetworkRouteGetNode
Get a node from a position.

NetworkRouteGetNumberOfLines
Get number of lines in a route.

NetworkRouteGetNumberOfNodes
Get number of nodes in a route.

NetworkRouteGetReport
Get route report.

NetworkRouteGetResultLineList
Get route result as line list.

NetworkRouteGetResultNodeList
Get route result as node list.

NetworkRouteGetResultPointList
Get route result as points.

NetworkRouteIsNodeStop
Is a node a stop (Boolean).

NetworkRouteIsNodeTurn
Is a node a turn (Boolean).

NetworkRouteRecoverStop
Get stop handle from a route handle.

NetworkSetDefaultAttributes
Set default attributes for a network.

NetworkStopAddStop
Add a stop.

NetworkStopCalculateRoute
Calculate a route from a stop handle.

NetworkStopCloneHandle
Copy a stop handle.

NetworkStopClose
Close an open stop handle.

NetworkStopDeleteAllStops
Delete all stops.

NetworkStopDeleteStops
Delete specific stops.

NetworkStopGet
Get a stop node given its position.

NetworkStopGetDemand
Get demand for a stop.

NetworkStopGetStopsList
Get the stop list for a stop handle.

NetworkStopMove
Move a stop (change stop list order).

NetworkStopSetDemand
Set demand for a stop.

NetworkTableIsTable
Is a table of given type and name part of network (Boolean).

NetworkTableSetLineNameAsTableAndField
Use table to set line names.

NetworkTurnGetAngle
Get turn angle.

NetworkTurnGetImpedance
Get turn impedance.

NetworkTurnSetImpedance
Set turn impedance.

NetworkWriteAttributeTable
Write an attribute table.

NetworkGetNumberTables
Get number of tables of given type.

NetworkGetTablename
Get specific table name.

NetworkAllocationGetType
Get type of allocation.

NetworkAllocatedCenterGetNumberLines
Get allocated center number of lines.

NetworkAllocatedCenterGetCapacity
Get allocated center capacity.

NetworkAllocatedCenterGetDemand
Get allocated center demand.

NetworkAllocatedCenterGetImpedanceLimit
Get allocated center impedance limit.

NetworkAllocatedCenterGetImpedanceDelay
Get allocated center impedance delay.

NetworkAllocatedCenterGetMaximumImpedance
Get allocated center maximum impedance.

NetworkAllocatedCenterGetAverageImpedance
Get allocated center average impedance.

Database Functions. (6)

OpenDatabase
Open a main level database.

TableCopyToDBASE
Copy a database table to a DBASE file.

TableKeyFieldLookup
Find the first record in a table that matches a given key.

TableReadFieldNum
Read a number from a table (using DBTABLEINFO).

TableReadFieldStr
Read a string from a table (using DBTABLEINFO).

TableWriteRecord
Write values to an existing database record.

Database Edit Functions. (5)

DBEditorModalSingleRecordView
Pop up a modal dialog to edit a database record.

DBEditorSingleRecordWidgetCreate
Create XmForm with controls to edit a database record.

DBEditorSingleRecordWidgetSaveChanges
Save changes to a record in a single record view.

DBEditorSingleRecordWidgetSetField
Set the value of a field in a single record widget.

DBEditorSingleRecordWidgetSetRecord
Load a record using a single record widget.

Tool Functions. (2)

ToolSetGPS
Set the GPS source for a tool.

ViewCreateToolBoxTool
Add the ToolBox tool for a view.

GPS Port Functions. (4)

GPSClose
Close a GPS Port.

GPSGetSourceName
Return the name of a GPS source.

GPSNumSources
Return number of GPS sources configured.

GPSOpen
Open a GPS port.

Miscellaneous Functions. (5)

AreaCorrelatePoint
Adjust point position of Raster to match known position in reference Raster.

ComputeRasterProperties
Compute Raster properties for vector.

CreateProjectFile
Create a blank project file.

ResizeArrayClear
Resize an array (clears all values to zero).

ResizeArrayPreserve
Resize an array (retains values).

* Classes. (1)

The GPS class now has a callback so that your script can be notified when the GPS moves.

An SML script can now create a HyperText Markup Language (HTML) view using a new class.  Use this widget to provide instructions to the user of your script.  Additional information on what you can do with this widget in your script, and a demonstration of its use, is provided via the “Instructions” icon in the sample Data Logger APPLIDAT installed as part of V6.00.

class is XmHTML      (an HTML viewer widget)

Alignment : String
ScrollBarPlacement : String
MarginHeight : Number
MarginWidth : Number
ResizeHeight : Number
ResizeWidth : Number
HorizontalScrollBar : Number
ScrollBarDisplayPolicy : String
MimeType : String
Text : String

New Sample Script.

Complex geospatial analysis can be accomplished using SML scripts.  These scripts can incorporate interaction with the user, complex views, GPS inputs, raster and vector combinations, and so on.  The sample APPLIDATs that have been released demonstrate some of these capabilities but require some programming skills to create.

A simple sample script demonstrating the raster based GIS capabilities of SML is in the script exchange at www.microimages.com/sml/repository/coastal_bays/coastal.sml.  Its implementation was funded by the Maryland Department of Natural Resources for use in their state’s GAP analysis program.  A color plate entitled GAP Analysis with SML is attached to describe its special application for wildlife habitat assessment in the U.S. GAP analysis program.  You can use this script as a model to implement your own rule based raster geospatial analysis.

This script is simple to modify because it has no interactive user inputs, uses only rasters, and has a simple rule based structure.  It uses 4 rasters as input:  a cluster image obtained from combining 2 Landsat TM images (from spring and fall); a wetlands inventory map; a soil map; and a general land-use/land cover map.  It applies a series of rules or tests to combine these rasters into a vegetation type map which is related to the condition of wildlife habitat.
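The rule-based structure described above can be sketched in a few lines.  The rules, class names, and code values below are hypothetical placeholders (the actual tests live in coastal.sml); the sketch only shows how four co-registered input rasters are combined cell by cell:

```python
def classify_cell(cluster, wetland, soil, landuse):
    """Apply an illustrative rule chain to one cell's four input values."""
    if wetland != 0:                 # wetlands inventory overrides the cluster class
        return "wetland"
    if landuse == 1:                 # developed land per the land-use/land-cover map
        return "developed"
    if cluster in (3, 4) and soil == 2:
        return "forest_on_hydric_soil"
    if cluster in (3, 4):
        return "forest"
    return "other"

def combine(cluster_r, wetland_r, soil_r, landuse_r):
    """Run the rules over four co-registered rasters (2D lists) cell by cell."""
    return [[classify_cell(c, w, s, l)
             for c, w, s, l in zip(cr, wr, sr, lr)]
            for cr, wr, sr, lr in zip(cluster_r, wetland_r, soil_r, landuse_r)]

print(combine([[3]], [[0]], [[2]], [[0]]))  # -> [['forest_on_hydric_soil']]
```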

Modifications since V6.00 CDs.

Slope and Aspect.

The general TNTmips process for computing slope and aspect has been rewritten to reorganize it into a functional form.  This revision has provided the basis to add functions into SML for these raster transformations.

Surface Modeling.

The TNTmips surface modeling process has been rewritten to reorganize it into a functional form.  As a result, the surface fitting processes are also available as SML functions.

Raster Import/Export.

Many of the TNTmips raster import/export processes have been reorganized into functional form and now also occur in SML.

Future Plans.

A simpler, alternative combined layer control and legend panel, similar to ArcView’s, is scheduled for the TNT products and will become available for use in SML.  The incorporation of this new view panel is underway now.

An expanded form template procedure is being added to control data entry in the TNT products, and SML in particular.  Part of this procedure is provided in V6.00 and can be seen in the design of its dialog boxes.  It will enable more user friendly forms (in place of dialogs) for field data logging and database creation and editing.

* Data Logger APPLIDAT.

Introduction.

A myriad of GPS data logging programs and devices already exists.  None of these incorporate the unique capabilities of TNTatlas to organize, retrieve, and display a mixed variety of geodata.  Few work with images, especially large images, maps, and/or graphics.  Few provide the ability to integrate the interactive selection of positions from a GPS or image/map interpretation.  Few can be easily customized to provide a carefully controlled, but unique attribute input.

Using the SML geospatial programming language, MicroImages has developed a sample Data Logger APPLIDAT having all these capabilities.  It is provided as a model for your modification and reuse.  This script and the associated sample TNTatlas are small and will even operate within TNTlite.  Both are automatically installed with any V6.00 TNT product and will show up as an APPLIDAT icon on your desktop.  If you wish to substitute your own large TNTatlas, simply do so as outlined below in the section Using your own TNTatlas.  Altering the attribute entry forms, and therefore the associated tables collected, is a relatively easy modification to this script.  If the comments in the script are inadequate to show you how to make these substitutions and alterations, please call software support for help.

This APPLIDAT is deliberately quite different from the first sample Biomass APPLIDAT provided with V5.90.  It is MicroImages’ role to develop a variety of APPLIDATs with differing objectives that test the capabilities of SML, which this one certainly did.  MicroImages then adds those functions needed to complete the project.  In this fashion, we not only provide samples but enrich SML, enabling its use for diverse kinds of custom geospatial analyses.

Four color plates are attached to illustrate the new Data Logger APPLIDAT.  They are entitled:

  Data Logger APPLIDAT
  Data Logger—Selecting a Position
  Data Logger—Adding New Records
  Data Logger—Editing Records

Specific Problem.

This Data Logger APPLIDAT was developed jointly with the Maryland Department of Natural Resources.  They will apply it in the field using participants in the AmeriCorps program started by President Clinton.  In this program, those in need of a job work for 9 to 10 months on a national service project for a minimum wage and earn college tuition credits.  MDNR furnishes the oversight and logistics support for one of Maryland’s AmeriCorps projects.  These participants annually walk, wade, or boat all the stream channels in Maryland and document sources of potential environmental problems.

This Data Logger APPLIDAT is designed specifically to enable the AmeriCorps teams to use the 1 meter color-infrared orthophotos and scanned 1:24,000 topographic maps that cover all of Maryland.  Via the Data Logger, these electronic materials can be taken into the field and used with or without GPS positions to automate this process.  Heretofore, this operation has used paper maps and map reading, computer prints of the orthoimages, keypunching and verification, and database import procedures to get this statewide survey data into TNTmips.

For field operations, MDNR will now abstract a local portion of their TNTatlas of the state (for example, a county).  These smaller TNTatlases will be loaded onto the hard drives of portable Data Logger tablets.  The field team will then check out a GPS device and a Data Logger set up for the current area of their field survey.  TNTatlas has been modified so that it will now accept and support the use of a GPS device to assist in locating the general area of operations on any layer.  In flat coastal areas with few distinct features, it is difficult, or at least time consuming, to find your current position on a detailed topographic map or 1 meter resolution color-infrared orthoimage.  When the unit is returned, the database tables will be transferred from it into TNTmips.

A flier distributed by the Maryland Department of Natural Resources at a recent state technology meeting is enclosed.  It summarizes their other planned applications of variations of Data Logger APPLIDATs.  MDNR plans to alter and use this Data Logger in a variety of field operations exploiting the more than 100 gigabytes of geodata currently in their state-wide TNTatlases.

Iterative Design.

First Prototype.

The design of the MDNR stream survey required the collection of 11 different types of stream environmental conditions, each in its own relational table:

  • Pipe Outfall  (19 descriptive parameter fields)

  • Exposed or Leaking Pipe (18)

  • Channelization (20)

  • Fish Barrier  (15)

  • Erosion Site (16)

  • Inadequate Buffer (23)

  • Unusual Condition (10)

  • Trash Dumping (14)

  • In or Near Stream Construction (13)

  • Road or Railroad Crossing (22)

  • Representative Site (28)

The first prototype SML script used about 400 lines to create each of these 11 tables and to graphically collect the user inputs for each of the 11 types of record.  Thus, the original draft implementation of this APPLIDAT had over 5000 lines, 90% of them devoted to dialog boxes.  A lot of time was required to script each data entry dialog.

New SML Tools Needed.

The first objective of creating this sample APPLIDAT was to provide MDNR and you with a general Data Logger design which could be adapted locally to other field applications.  It was obvious that the first prototype would not be a good sample script that you could easily modify to collect some other kind of field data.  This required immediately addressing the second objective in creating APPLIDATs:  to improve the flexibility and utility of the SML geospatial programming language to meet the demands you place upon it.  Specifically, a new and efficient means was required to create custom dialog boxes to fill in database records.

The best approach to setting up tables in a Data Logger application is one where its creator, or anyone who modifies the script, does not have to write a complicated script to define a record input dialog and therefore that table.  It was also concluded that various processes involved in the collection of records in TNTmips would benefit from the interactive creation of the dialog boxes used to complete each kind of attribute table.  As a result, the requirements of this APPLIDAT sent us back to adding features to TNTmips that have been on the new feature list for some time:  the interactive creation of table entry forms and the incorporation into them of the filters or constraints needed to control the accuracy of the data entry into each field.

General TNT Requirements.

Think of this need in TNTmips as follows.  You plan an extensive campaign of geodata collection by interpreting orthophoto or satellite images with the object editor.  You wish to have several technicians carry out this program or you will conduct it.  First you design the kind of point, line, and polygon features you wish to interpret from these many images:  road types, lakes, forest stands, and so on.  Then you decide upon the specific attributes you wish to interpret and record for each graphical element.

Now you are faced with setting up an attribute record and associated relational table for each different type of feature you wish to interpret.  Assume you are planning carefully and the actual interpretation will be done by others (or you want to impose some rigor upon yourself).  You certainly want to carefully filter or constrain the attributes entered into each record to ensure their accuracy, eliminate ambiguity, permit their accurate selection in subsequent geospatial operations, and allow analysis in a spreadsheet or statistical package; in short, to get it right in the first place rather than by extensive subsequent testing and editing.

Typical Use of Constraints.

In this photointerpretation example, assume one of your line element interpretations will be for roads.  One of the fields recorded for roads will be a string field identifying the type of road:  dirt, improved dirt, asphalt 2-lane, concrete 2-lane, divided, ...  You could let the interpreter fill in a free-form string, but they might spell it wrong or fail to consult the instructions and pick one of your mandatory choices.  With constraints, you can design this field to be multiple choice:  when the field is selected for data entry, the list of choices will pop in, and the interpreter must select one of them before they can proceed.

Suppose a field is to contain the estimated width of the road.  The interpreter could be asked to fill in any kind of number or select from a reference list.  But, with constraints, a list of widths can be incorporated as multiple choices.

Suppose the county in which the road occurs is another field.  Pop in a list of county names, and a lot of spelling errors will be eliminated.  Another field requires a judgment as to whether the road is in good condition, and the choice of yes or no is presented.

All these uses of constraints involve the incorporation of multiple choices.  However, suppose you do not wish to use a multiple choice for a numeric field where a more accurate number may be recorded.  In this situation, you can at least set a constraint on the type and range of the numbers which will be accepted.  As a result, you will not get roads recorded as 3000 or 3 feet wide because the decimal point has been mispositioned or omitted.
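As a sketch of how such entry constraints behave, the two common cases look like this in Python.  The field names, choice lists, and width limits below are hypothetical, not taken from the actual MDNR tables:

```python
# Illustrative sketch of the two kinds of data entry constraints described
# above.  ROAD_TYPES and the width limits used later are assumptions for
# the example, not values from the APPLIDAT.

ROAD_TYPES = ["dirt", "improved dirt", "asphalt 2-lane",
              "concrete 2-lane", "divided"]

def check_choice(value, choices):
    """A multiple-choice field accepts only one of the listed strings."""
    return value in choices

def check_range(value, low, high):
    """A numeric field accepts only a number within the stated range."""
    try:
        number = float(value)
    except (TypeError, ValueError):
        return False
    return low <= number <= high
```

With a width constraint of, say, 8 to 300 feet, an entry of 30 is accepted, while 3 or 3000 (a misplaced decimal point) is rejected before the record can be saved.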

New TNT Procedures.

Once you have interactively designed an attribute record with constraints in TNTmips, it will set up the appropriate new table, control the presentation of the data entry form (for example, in the single record dialog box or multiple record table), and constrain the entry of attributes into this form.  This table description can be used in a project, saved, reused, and moved into an SML script.  In the Data Logger APPLIDAT, each table is set up, and records added to it, by a form designed in TNTmips and simply inserted into the script.  In this fashion, TNTmips’ table forming and data entry are being improved.  A list of the constraints being added to assist you in the design of your attribute tables can be found above in the New Features/Database Constraints section.

Current Prototype.

This new data entry form design is not yet an easily used, complete feature of the TNT products, but priority effort is continuing to complete it.  The Data Logger does clearly illustrate the use of this concept in the field with a GPS unit to collect point element attributes.  Its data entry forms, and thus its tables, were designed in TNTmips using the new data entry constraints.  This eliminated 4000 lines of script, reduced this APPLIDAT to 500 lines of script, and greatly simplified your modification of it for other uses!  Contact MicroImages to track and obtain the latest advances in the use of these new data entry features in TNTmips post V6.00.

Hardware.

MDNR has planned to use a W95-based hand-held slate as the hardware for the Data Logger.  When you try this APPLIDAT, you will find that with a little further modification of its data entry procedures, only a stylus will be needed in the field and no keyboard is required.  A copy of a commercial flier is enclosed describing the Fujitsu Stylist slate selected for this application.  It is a very well-designed field unit with all the necessary features and works well with the Data Logger.  Its only significant drawback is that its color screen cannot be viewed in direct sun.  MDNR is currently searching for other tablets and discussing the possible incorporation of daylight-readable screens with Fujitsu.  The latest data-slate designs also incorporate a small cellular phone keypad at the side of the screen.  Like cell phone keys, these can be used to directly enter numbers, and they support character entry through 2 or 3 keystrokes, which is more than adequate for a well-designed field survey that uses multiple choice and access to predefined reference tables.

Operation.

The operation of the Data Logger APPLIDAT is discoverable throughout.  It has several subsections.

Using the Atlas Subsection.

The Data Logger script automatically starts the TNTatlas provided with it.  The operator then navigates in TNTatlas to the level which contains the image or map to be used for the current area of data collection.  DataTips are provided for each layer in the atlas to advise the new user how to navigate through it.  In this sample, they guide the user to the bottom or third level in the atlas which contains the color-infrared orthoimages.

The SML script does not contain the HelpTips presented in the TNTatlas.  HelpTips can be added to any TNTatlas by placing them in a string field.  Then add a vector overlay layer to each atlas layer containing a polygon around the area to which they apply.  Then simply attach the record with that string field to the polygon and expose it as you would any other DataTip.  This is a roundabout way of creating a HelpTip, but it can expose HelpTips unique to subareas in a single atlas view.  The first or county layer in the TNTatlas used here provides 2 different HelpTips, depending upon where the cursor is on this image layer.

At any level, the TNTatlas can be suspended, and the data entry procedure can be entered by selecting the “Point” icon.  At any time, the TNTatlas can be reentered and used to navigate to a new location.  The TNTatlas automatically installed with your copy of the Data Logger has only 3 sample levels and is very tiny to fit within the limits of TNTlite.  Each of the 3 atlas layers will fit entirely within a single TNTlite view.  However, MDNR has made available a much larger sample TNTatlas of this same area.  Should you wish to use, experiment with, and demo this larger TNTatlas, MicroImages will send it to you on a CD.

Using your own TNTatlas.

As a TKP.  The simplest way to use your own TNTatlas is to rename or delete the sample atlas Project File installed as part of this APPLIDAT.  Remove the atlas file “datalog.rvc” and you will turn the APPLIDAT into a Turn-Key-Product (TKP) which expects you to provide its geodata.  When you click the datalog.sml icon, the script will not be able to find its TNTatlas and will present the standard TNT Object Selection window.  Use it to navigate to your TNTatlas and select it.  The TKP will then continue to operate just as outlined for the APPLIDAT, except the HelpTips will not be shown in the TNTatlas unless you incorporate them as string fields as outlined above.

You do not have to use a TNTatlas in the TKP mode.  You can also use the Object Selection window to select any raster object.  In this case, the TKP will display this raster and put you directly into the Data Logger mode ready for point input.

As an APPLIDAT.  You can easily change this Data Logger APPLIDAT to start up automatically using your TNTatlas.  Simply edit the SML script and change the filename and object name to that of the starting layout in your TNTatlas (in other words, the layer you wish to start with).  This is one of the first lines in the script, and its function is explained in the comments.  You may also need to change the name in the script of the projection used in your atlas, as the sample atlas provided is in UTM.

GPS input.

A GPS device can be started up before or during the operation of the APPLIDAT.  Many new features had to be added to TNTmips and SML to assist the user of the Data Logger in activating it and locating its current position in the atlas.  The GPS unit has several states which must be understood and brought to the attention of the Data Logger user.  Some of the possible parallel states of the GPS are:

  • turned off or no battery

  • turned on, but no coordinates being recognized

  • current position is off the extent of the current layer(s) being viewed

  • current position is in the extent but not in the area currently viewed

  • current position is about to go out of the view or extent

  • current position is within the view area, and so on

This APPLIDAT introduces the use of different color HelpTips with varying content to help alert the operator as to the state of the GPS unit.  These HelpTips and their timing and color are in a single attached file, not scattered through the script, so that they can be easily created and edited.

Picking a Position.

Anyone logging features in Maryland can obtain the 1 meter color-infrared orthophotos to do so.  Obviously, if you can detect a feature of interest on these orthophotos, pointing to it will provide more accurate coordinates than an uncorrected GPS position (one without differential, or DGPS, correction).  But whether the position is derived from the GPS or from the viewed image, its observed characteristics can be entered as attributes.  When a DGPS device is available, the Data Logger must support logging coordinates from either accurate source.  Combinations of a DGPS and orthophotos, or at least georeferenced satellite images and a GPS unit, can be set up anywhere in the world for use in field data logging.

Examine the MDNR stream logging operation as an example.  Some environmental features such as a bridge or culvert can be accurately logged by standing on them with a DGPS unit.  However, other field situations arise such as fish barriers or steep eroded banks which cannot be occupied.  In these cases, an indication of the feature or its approximate location can be selected on the color-infrared orthophoto and its accurate position selected by the cursor.  This Data Logger APPLIDAT provides the capability for its user to make this kind of “on-the-spot” decision.

One of the unique features in this APPLIDAT is the ability to log the coordinates of a position either as a point selected in the view or as the current GPS position.  An integrated crosshair gadget, positioned either by the cursor or by the GPS, provides this.  When you select the “Point” icon to begin data logging, a single combined gadget indicates your position on the view.  It will track the current GPS position and can be used to log the coordinates of the point on which you are standing.  However, at any time, you can select an image feature with the cursor; a portion of this gadget will move to it, and the coordinates of that feature’s position in the view will be logged.  This position on the view can be fine-tuned with the arrow keys.  The GPS portion of the gadget is left behind and continues to track your actual GPS position.  At any time, you can snap the feature selection portion of the gadget back to the GPS portion using the “Snap Back” icon in the toolbar of the view.

From the above list of GPS states, you can see that a number of other conditions can arise in the field.  This integrated feature selection and GPS crosshair gadget supports all these different states.  For example, if the GPS is turned off, or is on but not providing coordinates, the GPS portion of the gadget will be absent, and you will also be informed of these conditions by the HelpTips.  If GPS coordinates are being read but fall off the edge of the view, only the cursor portion of the feature selection gadget will be shown, and a color arrow will point off the view toward the coordinate position of the GPS and track its movement.

Selecting a Class of Features.

When you use the “Point” icon to transfer from the atlas navigator into the logging activity, a toolbar window will pop in so you can identify the type of feature to be logged (in other words, its table).  In this Data Logger, this toolbar window presents 11 icons and associated ToolTips.  Each icon represents a stream condition and its associated attribute table.  When you select one of these icons with the cursor, the form required to fill out that table will pop in as a window.

Completing a Feature’s Record.

All 11 forms use the new constraints procedures outlined above.  The current coordinates of the position gadget are automatically filled into the form but can also be manually entered or edited.  No default values are provided, and this is deliberate.  Defaults mean that an inattentive or lazy user will not correctly complete each field.  Multiple choice fields start out showing a question mark, and all numeric fields are empty.  Every field must be completed with the surveyor’s best estimate.

Most of the fields are filled in by clicking on them and then selecting from the multiple-choice list this provides.  Numeric values can be entered only in the ranges specified.  When no keyboard is provided, such as with the Fujitsu slate, these multiple choices are easily made with the stylus, and numeric fields can be filled out with the stylus and a software keyboard.  Any entry can be edited at any time, and entries can be completed or edited in any order.

This is a carefully designed stream survey.  Choices such as “Other” are not provided as these produce ambiguous and unusable results.  Only after all the fields have been selected and filled in, including all numeric fields, will the OK button become active.  This OK button will then close the form, create this data point record, append it to the corresponding table, and plot a color symbol at the corresponding position in the view.  At any time prior to selecting OK, the Cancel button will close the form and no record will be added to the table.

Editing an Existing Location.

Editing an existing record is quite easy at any time during logging or several days later.  Simply move the feature selection gadget onto that record’s graphical symbol on the view, then click the right mouse button (or hit the return key on a Mac).  This will locate the corresponding record in the corresponding table.  The form for that record will then be opened and filled out with the contents of the selected record.  Any field in the form can then be edited just as if a new record were being created.  The OK button will replace the selected record with this new, altered record.

At any time when an existing record is open in the form for editing, it can be removed entirely with the “Delete” button.  During editing, the coordinates of the record can be altered by directly editing them.  Since the record’s symbol is also selected on the view and open for editing, its position can be interactively adjusted.  Moving the device controlling the cursor (stylus, pointer, mouse, ...) will move the position gadget.  The new position can then be set using the left mouse button.

Instructions.

This Data Logger APPLIDAT uses the new HTML (HyperText Markup Language) function in SML to create the contents of its sample instruction script.  However, real operational instructions are not included, as MDNR will create those instructions appropriate to their particular field activities and typical operator skills.  You should not need instructions to operate this Data Logger, as its operation is discoverable and uses extensive HelpTips.

The skeletal instruction script provided is a sample into which you can easily incorporate attractive instructions for your modified Data Logger.  Simply create the instructions you need in Microsoft Word and save them as HTML formatted text and graphics, or use any other HTML editor.  Then substitute your instructions into this script.

Since a script will interpret and display HTML, you can incorporate any graphics, icons, format, layout, and other features that HTML supports to make your instructions attractive and easy to follow.  As yet, you cannot provide links from your instructions or other scripts into another script such as a Data Logger or to a web site to get further instructions or data.  However, this capability can be added to this HTML interpreter for use in SML scripts in a future iteration.

* Hyperspectral Analysis.

Getting Started Booklet.

A Getting Started tutorial booklet by Dr. Randall Smith is provided to summarize the rudiments of all the hyperspectral analysis procedures in TNTlite 6.0.  Dr. Smith is now charged with the creation of a second companion booklet that will summarize the concepts of hyperspectral imaging for beginners.  At present, this field of remote sensing is complicated, and techniques are evolving rapidly.  A clear set of step-by-step procedures cannot be offered, as your objective, source of imagery, and associated control information are unique.  You probably will need to seek other references and help from experienced parties in your first attempt at exploiting this kind of imagery.  Remote Sensing:  Models and Methods for Image Processing, by Robert A. Schowengerdt, 1997, Academic Press, 522 pages, is a very good technical reference on remote sensing to add to your bookshelf.  It also contains several sections on hyperspectral imaging concepts.

Use in TNTlite.

The resolution limits on raster objects used in TNTlite 6.0 have been slightly increased to accommodate the 614 by 512 pixel size of an AVIRIS image.  Remember, it is the product of these numbers that controls the raster size, so the former limit of 640 by 480 pixels (307,200 cells, versus 314,368 cells for 614 by 512) is still accommodated.  Since the number of raster objects is not limited, a complete AVIRIS image with all its spectral bands can now be analyzed in TNTlite 6.0.

Recently, after the publication of this increase in TNTlite 6.0 size, the manager of the AVIRIS program contacted MicroImages relative to the new 614 by 512 pixel size limits.  He indicated that the low altitude images will become larger than 614 by 512 pixels when distributed.  This happens when nearest neighbor resampling is applied during the georectification correction process.  These images have much improved internal geometry but get wider than 614 pixels due to the irregular edges which are created.

This poses no particular problem for TNTlite:  while the images are larger, the spatially irregular edges created in the resampled images are not usable.  During import from the AVIRIS or ENVI formats, TNTlite 6.0 can select the 614 by 512 pixel usable area and omit these ragged edges from the RVC file used for subsequent processing.  The TNTmips user does have the option to import the full area of the enlarged image, with null cells in the irregular edge strips created outside the usable image cells.  However, this provides no advantage over TNTlite, as these areas are useless anyway.  Further adjustments for this effect will be made in TNTmips and TNTlite as needed.

Importing Spectral Curves.

Spectral curves can now be imported from plain ASCII text files, as well as exported in that simple format (export to a text file was introduced in V5.9).  This simple text file format allows users to place comment lines anywhere in the file.

At the request of individual clients, import procedures have been provided for their particular field reflectance and radiance spectral curves.  For example, the raw radiance curves can be imported as data numbers from the Spectron 590 portable spectroradiometer format or from simple ASCII values.  In order to properly deal with such radiance curves, additional procedures have been supplied for calibration, combination, and analysis of your curve into useful reference libraries.  If you have spectral curves you wish to import, please contact MicroImages and be prepared to supply sample curves and their file format.

Principal Component Analysis (PCA).

Hyperspectral images incorporating many narrow adjacent spectral bands have a higher degree of redundancy than broader band spectral images such as Landsat or color-infrared photographs scanned to RGB.  For some kinds of applications, it is appropriate to use principal component analysis to reduce spectral dimensionality before proceeding (in other words, reducing the number of images before further visualization or processing).  You can now use this process, where appropriate, to compute any desired number of principal components and view a plot of eigenvalues for all these new components.  You can also view eigenvectors for any individual component or input band, as well as component variance plots.  The statistics computed in this process can be saved as an RVC subobject with the input images for review and subsequent reuse in other processes.
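The dimensionality reduction described above can be sketched as a generic PCA on a pixels-by-bands matrix.  This is an illustration of the standard technique, not MicroImages’ implementation; the function name and shapes are assumptions for the example:

```python
# Sketch of PCA over a band stack: rows are pixels, columns are spectral
# bands.  Eigenvalues of the band covariance matrix measure the variance
# captured by each component.
import numpy as np

def principal_components(bands, n_components):
    """Return descending eigenvalues and the projection of each pixel
    onto the first n_components eigenvectors."""
    centered = bands - bands.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]      # largest variance first
    eigenvalues = eigenvalues[order]
    eigenvectors = eigenvectors[:, order]
    scores = centered @ eigenvectors[:, :n_components]
    return eigenvalues, scores
```

For highly redundant bands, nearly all the variance collapses into the first few components, which is exactly why PCA is useful before visualization.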

Minimum Noise Fraction Transform (MNFT).

For many reasons, hyperspectral images can have differing amounts of noise in each spectral band.  As a result, their analysis with standard principal components may not show the usual trend of steadily increasing noise with increasing component number.  The Minimum Noise Fraction Transform (MNFT), a modified version of PCA, has been provided.  It computes components with a general logic similar to principal components but also ensures that the signal-to-noise ratio decreases steadily with increasing component number.  When MNFT is used, the low order components (1st, 2nd, 3rd, ...) will be almost noise free.  An additional graphical presentation provides the amount of noise variance contribution as a function of input band (wavelength).  The statistics computed in this process can be saved as an RVC subobject with the input images for review and subsequent reuse in other processes.
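The MNFT logic can be sketched as a noise-whitening step followed by ordinary PCA.  This is a generic outline of the published MNF idea, not MicroImages’ implementation; it assumes the caller supplies a separate estimate of the per-pixel noise (in practice this is often derived from differences between adjacent pixels):

```python
# Sketch of a minimum noise fraction transform: whiten the data so the
# noise covariance becomes the identity, then run PCA.  Component order
# then reflects decreasing signal-to-noise ratio rather than raw variance.
import numpy as np

def mnf_components(bands, noise):
    """bands and noise are pixels-by-bands arrays; returns components
    ordered by decreasing signal-to-noise ratio."""
    noise_cov = np.cov(noise, rowvar=False)
    evals, evecs = np.linalg.eigh(noise_cov)
    whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T
    whitened = (bands - bands.mean(axis=0)) @ whitener
    # Ordinary PCA on the whitened data.
    evals2, evecs2 = np.linalg.eigh(np.cov(whitened, rowvar=False))
    order = np.argsort(evals2)[::-1]
    return whitened @ evecs2[:, order]
```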

Hyperspectral Explorer.

It is currently popular to portray hyperspectral images as the edge of a color cube.  The top single spectral band image is presented in some enhanced color scheme.  Each additional spectral band is viewed edge-on using the same color enhancement as the top image.  This pseudo-3D hypercube presentation gives you an introduction to the concept of hyperspectral images.  It has no value at all in helping you select your optimal view of hyperspectral images in three bands in RGB color space.

The Hyperspectral Explorer, unique to TNTmips, moves you beyond this popular, simple cube-edge portrayal of hyperspectral images, which has limited value in visualization and related analysis.  It is designed to help you visualize hyperspectral images in the limited RGB space of human vision.  There are (256x255x254)/6 = 2,763,520 combinations of 3 spectral bands selected from 256 spectral images.  Each of these combinations could be displayed in 6 possible color schemes in your RGB view (for example, RGB, GBR, ...) for a total of 16,581,120 color permutations.  The Hyperspectral Explorer helps you rapidly test a subset of all these possible views to find the RGB combination of three bands which renders an optimal visual display.  It animates the process by cycling you rapidly through a logical sequence of combinations.  At any point, you can select the combination which best portrays the features of interest in the hyperspectral image of your unique site.  A color plate entitled Hyperspectral Explorer is attached to illustrate this new procedure.
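The combination counts quoted above are easy to confirm:

```python
# comb(256, 3) counts unordered triples of bands, i.e. (256 * 255 * 254) / 6,
# and each triple can be assigned to R, G, and B in 3! = 6 orders.
from math import comb, factorial

band_triples = comb(256, 3)
color_permutations = band_triples * factorial(3)
```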

The Hyperspectral Explorer also presents a color graphic window with a quick overview of all RGB band combinations that could be created using its current settings (in other words, its current interband intervals).  The first line drawn horizontally across this graphic is a single horizontal (or vertical) line in the first 2D image and is rendered with matching colors.  The next line and successive lines in the graphic show the same line in the image but use the next successive set of spectral band combinations for the interband interval selected.  For example, assume that the initial 2D view in RGB combines spectral bands 3, 20, and 24.  The first color line in this auxiliary graphics window is the same as the horizontal or vertical line selected in the 2D image.  The next RGB line below in this inspection tool will be bands 4, 21, and 25; the next would be 5, 22, and 26; and so on for the same horizontal or vertical image line.  This 2D color graphic quickly pinpoints those RGB combinations that are most colorful (uncorrelated), those that are rather gray-toned and uninteresting (highly correlated), and so on.
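The band-stepping sequence described above can be sketched as follows.  Starting from the example RGB assignment of bands 3, 20, and 24, each successive line in the inspection graphic advances all three bands by one, keeping the interband intervals fixed; the function name and the 256-band limit are illustrative:

```python
# Sketch of the interband-interval stepping used by the inspection graphic:
# each successive triple shifts all three band numbers up by one until any
# band runs off the top of the stack.

def band_sequence(start, n_bands):
    """Return the successive (R, G, B) band triples for a fixed interval."""
    r, g, b = start
    triples = []
    while max(r, g, b) <= n_bands:
        triples.append((r, g, b))
        r, g, b = r + 1, g + 1, b + 1
    return triples
```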

n-Dimensional Visualizer.

This visualization tool, popularized by another hyperspectral analysis product, is now available in TNTmips.  It is used to animate the visual distribution of hyperspectral pixels in wavelength space.  Each spectral band or component of interest (PCA, MNFT, or other) can be assigned an orthogonal axis in this n-dimensional plot.  Any number of axes can be defined, but using more than 10 to 15 bands (axes) provides a confusing display.  Use a polygon or region to outline an area of interest.  Each pixel in the region will be plotted in this n-dimensional graphic, yielding a cloud of points each correlated back to a pixel in the 2D view.  You can then use various automatic and manual controls to rotate these axes and the cloud of points while searching for clusters of points, extreme pixels, and other interesting distributions of points in this n-space.  Stop the rotation at any point, outline a group of points, color them, and that color is assigned to the corresponding cells in the 2D view.  A color plate is attached entitled n-Dimensional Visualizer to illustrate this procedure.

Tracking more than 3 axes in a 2D view is possible but can be confusing.  The TNT products support several stereo viewers (for example, anaglyph glasses, 3DMAX, Simuleyes, ...).  A stereo option using these viewing devices is available for the n-space plot to increase the discrimination of significant clusters of points.  While computer stereo devices are still crude, they are quite effective at separating clusters of points as they pass back and forth in front of or behind other clusters.  In 2D, these situations can appear to be just the rotation of an irregular cluster of points.

Self-Organizing Map Classifier (SOM).

The Self-Organizing Map (SOM) Classifier is unique to the TNT products and is a new unsupervised classification process which uses a neural network approach to find a best-fit set of 256 class-center spectra (it will be altered to allow more than 256 classes).  It sets up a 16 by 16 array of neural nodes, each representing one class.  The spectral values for each neural node start out as the spectral curves of pixels selected at random.  The spectral curves of all the image pixels are then compared one by one against this initial set of 256 nodes.  The node with the most similar spectrum has its spectrum adjusted on a weighted basis to improve the match.  Nodes in the immediate neighborhood in the 16 by 16 array also have their spectra adjusted to a lesser extent.  After many sample spectral curves for individual pixels have been processed, the values in the nodes converge to a set of class center spectra that approximate the distribution of all image spectra in n-dimensional space.
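The training loop just described can be sketched as below.  This is a bare-bones illustration of the standard SOM update rule; the classifier’s actual weighting and neighborhood schedules are not documented here, so the learning rate, neighborhood radius, and step count are assumptions:

```python
# Sketch of a self-organizing map over pixel spectra: a 16 x 16 grid of
# nodes, each a candidate class-center spectrum, pulled toward randomly
# sampled pixel spectra with influence falling off with grid distance.
import numpy as np

def som_train(pixels, grid=16, steps=2000, rate=0.5, radius=2.0, seed=0):
    """pixels is a samples-by-bands array; returns grid x grid x bands."""
    rng = np.random.default_rng(seed)
    n_bands = pixels.shape[1]
    # Nodes start as the spectra of randomly chosen pixels.
    nodes = pixels[rng.integers(0, len(pixels), grid * grid)].astype(float)
    rows, cols = np.divmod(np.arange(grid * grid), grid)
    for _ in range(steps):
        spectrum = pixels[rng.integers(0, len(pixels))]
        # Winning node: the one with the most similar spectrum.
        best = np.argmin(((nodes - spectrum) ** 2).sum(axis=1))
        # Neighbors of the winner in the grid also move, to a lesser extent.
        dist2 = (rows - rows[best]) ** 2 + (cols - cols[best]) ** 2
        influence = rate * np.exp(-dist2 / (2 * radius ** 2))
        nodes += influence[:, None] * (spectrum - nodes)
    return nodes.reshape(grid, grid, n_bands)
```

Because neighboring nodes are dragged together, similar class spectra end up adjacent in the grid, which is the property the next paragraph relies on.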

In the SOM class image, similar class spectra lie close together in the node array, and the greater the area that similar materials cover in the hyperspectral image, the more classes are used to represent them.  This ensures adequate spectral discrimination of different varieties of common, widespread materials.  The class image produced does not look like the traditional output of other unsupervised classification algorithms, where class numbers are not related to each other.  SOM produces class numbers that reflect the proximity of the classes in the spectral space represented by the input images.  Obviously, as with most other unsupervised image analysis procedures, SOM is not a good approach when searching for materials that may have a unique spectrum but occupy a very small area in the image.

The final SOM matrix can be optionally saved and viewed.  A distance raster which has the same dimensions as the SOM matrix can also be saved.  It contains a measure of the distance between each node or class in the matrix and its 8 neighbors.  It can be used to review the aggregation and similarity of the classes.

Auto Correlogram.

This new feature provides an estimation of the spatial-spectral variability of a hyperspectral image on a pixel-by-pixel basis.  It computes an average spectral angle between every pixel and its 8 neighbors (using the standard Spectral Angle Mapper (SAM) algorithm).  It functions as an n-dimensional spectral/spatial filter.  It is insensitive to different illumination factors such as level of irradiance, shadows, varying moisture levels, and so on, as it represents the relative variation in spectral values for a kernel of cells moved over the area of the images.
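The Spectral Angle Mapper measure named above treats two pixel spectra as vectors in band space and takes the angle between them; the correlogram averages this angle over a pixel’s 8 neighbors.  A minimal sketch of the angle itself (not MicroImages’ implementation):

```python
# Sketch of the SAM measure: the angle between two spectra.  Because the
# angle ignores vector length, uniform brightness changes (illumination,
# shadow) do not affect it, which is the insensitivity noted above.
import numpy as np

def spectral_angle(a, b):
    """Angle in radians between two spectra; 0 means identical shape."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cosine, -1.0, 1.0)))
```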

The output of the process consists of 2 floating point (32-bit) rasters that have the same spatial dimensions as the input hyperspectral image.  They contain the average spectral angle (ASA) and its standard deviation for every pixel in the input image.  Areas with low cell values in an ASA raster represent ground features with low spectral variability, while areas with high values are associated with linear boundary elements such as roads, field boundaries, and human-built structures.

Areas that do not appear uniform in the ASA raster are probably the most interesting, as this analysis reduces the spectral variability of your n-band image set to a single parameter.  The ASA raster creates or enhances surface material transitions that are hard to visualize in any other way.  It provides a quick analysis capability for detecting the boundaries of natural and man-made features that might be similar to the background in most of the spectral bands but differ significantly from cell to cell in some specific wavelengths.  In general, this new approach acts as a very good edge/texture-enhancement algorithm that utilizes all spectral bands of the image.

Miscellaneous.

You can now use a variable averaging window when extracting pixel spectra from a hyperspectral image.  A color plate entitled Subpixel Spectral Identification is attached to illustrate the results of using Matched Filters and these in situ spectral curves for selected materials.

Modifications Since V6.00 CDs.

Hypercube Object.  All of the design and coding were completed for the addition of a compressed Hypercube raster object to the RVC Project File.  Unfortunately, this was completed too late in the development cycle of V6.00 to allow its safe incorporation into the appropriate processes.  It will be added to TNTmips after the shipment of V6.00.

Local Adaptive Constrained Energy Minimization (LA-CEM).  This new hyperspectral analysis method has been developed by MicroImages for Local Adaptive Matched Filtering.  It is described in detail in the AVIRIS abstract below.

AVIRIS Workshop.

MicroImages’ staff has submitted the following 2 abstracts for papers at the Jet Propulsion Laboratory’s AVIRIS Earth Science and Applications Workshop in early February 1999.

Title.  Free Software for Analyzing AVIRIS Imagery
          by Randall Smith and Dmitry Frolov

Abstract.  TNTlite can perform all popular hyperspectral visualization and analysis procedures on full AVIRIS frames (614 x 512 pixels, all bands).  MicroImages has distributed TNTlite free for the past two years for use on Windows, Mac, and UNIX platforms, and hyperspectral procedures have recently been added.  This presentation will demonstrate TNTlite’s free visualization capabilities such as optimal mapping into color space and n-dimensional visualization.  We will also present AVIRIS analysis results demonstrating TNTlite’s use in spectral classification and unmixing, spectral matching, minimum noise fraction transform, and others.  TNTlite will be distributed FREE at the workshop.

Title.  Locally Adaptive Constrained Energy Minimization for AVIRIS Images (LA-CEM)
by Dmitry Frolov and Randall Smith

Abstract.  The Constrained Energy Minimization (CEM) technique maps the relative abundance of target materials with known spectral signature against an unknown background.  We have developed a Locally Adaptive version of the CEM algorithm (LA-CEM) that enhances the contrast between target and background in the output abundance image, improving automatic detection and classification.  We evaluated the LA-CEM technique using different types of ground cover in AVIRIS images.  The use of locally collected statistics produces a better signal-to-noise ratio in the abundance image and potentially reduces the number of false alarms.
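
The classical CEM filter that LA-CEM builds on can be sketched in a few lines (a minimal illustration of the standard global technique, not MicroImages’ locally adaptive implementation; as the abstract describes, LA-CEM replaces the global statistics with statistics collected in a local window):

```python
import numpy as np

def cem_abundance(pixels, target):
    """Classical (global) CEM: project each pixel onto a filter w that
    minimizes total output energy subject to w.T @ target == 1, so pixels
    matching the known target spectrum map to an abundance near 1.
    pixels: (N, bands) array; target: (bands,) known signature."""
    R = pixels.T @ pixels / len(pixels)   # sample correlation matrix
    Rinv_d = np.linalg.solve(R, target)   # R^-1 d without an explicit inverse
    w = Rinv_d / (target @ Rinv_d)        # w = R^-1 d / (d^T R^-1 d)
    return pixels @ w                     # abundance estimate per pixel
```

By construction, a pixel whose spectrum equals the target receives abundance exactly 1; recomputing R over a moving window, as LA-CEM does, lets the filter suppress the local rather than the scene-wide background, which is what improves the target/background contrast.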

Comparison of Products.

The following is a comparison of the analysis procedures available within current versions of competing products to the best of MicroImages’ current knowledge.  ERDAS is not included, as they have provided only a skeletal hyperspectral analysis procedure within Imagine 8.3.  ERMapper has not distributed any commercial product in this area.  Should you have further information to update or correct this table, please supply it.  If you know of techniques and features in other systems and not in TNTmips, let us know so they can be researched and added.  From this table you can make your own judgment as to which is the superior product for the analysis of hyperspectral images.

                                          TNTmips 6.0      PCI 6.0   ENVI 3.0
Visualization Techniques
  n-Dimensional Visualizer                Yes              No        Yes
  Hyperspectral Explorer                  Yes              No        No
  Wavelength Selector                     V6.00+           No        No
Spectral Curve Analysis
  Remove Continuum (RC)                   Yes              ?         Yes
  Spectral Feature Fitting (SFF)          Just for curves  No        Yes
Calibrations
  Equal Area Normalization (EAN)          Yes              No        No
  Log Residuals (LR)                      Yes              No        No
  Additive Offset Calibration (AOC)       Yes              Yes       Yes?
  Flat Field Correction (FFC)             Yes              Yes       Yes?
Dimensional Reduction Methods
  Principal Component Analysis            Yes              Yes       Yes
  Minimum Noise Fraction Transform (MNFT) Yes              No        Yes
Image Analysis Procedures
  Auto Correlogram                        Yes              No        No
  Spectral Angle Mapper (SAM)             Yes              Yes       Yes
  Cross Correlation (CC)                  Yes              No        No
  Linear Spectral Unmixing (LSU)          Yes              Yes       Yes
  Matched Filtering (MF)                  Yes              No        Yes
  Vector Quantification Filtering (VQF)   Yes              No        No
  Self-Organizing Map Classifier (SOM)    Yes              No        No
  Locally Adaptive Constrained Energy
    Minimization (LA-CEM)                 V6.00+           No        No

V6.00+ = available now, post V6.00

Updates/Aircraft Hyperspectral Imagers.

AVIRIS.  (Airborne Visible/Infrared Imaging Spectrometer).  This is the second in a series of imaging spectrometer instruments for earth remote sensing.  For more information, start at home page http://makalu.jpl.nasa.gov/aviris.html.

Low Altitude Flights.  The AVIRIS hyperspectral instrument was moved at the end of the summer from the ER-2 (U-2) aircraft onto a NOAA Twin Otter aircraft to fly a low altitude campaign in the autumn and early spring.  This is a joint effort between JPL, NASA, and NOAA.

The AVIRIS low-altitude flight site schedule for the autumn and winter period is posted well in advance on the JPL web site.  A significant portion of these flights are for coastal applications of hyperspectral images, an application area of significant interest at the present time.  Please note that all the imagery collected by AVIRIS is available to anyone who will pay the $250 for the preparation of each 8mm distribution tape.  Imagery already in private possession can be reproduced freely.

As part of the autumn 1998 campaign, good quality, low altitude hyperspectral imagery was flown for a number of interesting project sites in the western U.S.  Quick-looks of these images are usually posted within a week for each site.  These initial quick-look postings are distorted, as the effects of aircraft yaw, pitch, GPS track, and so on have not been removed.  New quick-looks are posted to replace them once the images have been georectified at JPL and are ready for distribution, at which time anyone can order them, not just NASA/NOAA experimenters.

In October, imagery was collected of the lettuce and other agricultural fields being used in the NASA-sponsored precision agriculture project in which MicroImages is a participant.  These images have already been georectified and new quick-looks posted; they will soon be distributed to MicroImages and the other project participants.  MicroImages will soon show the results of the preliminary processing of these images.  Present AVIRIS flight schedules call for additional flights for this and related projects in California in April 1999.

DAIS (Digital Airborne Imaging Spectrometer 7915).  This is a hyperspectral device built by Geophysical Environmental Research Corp.  It is being operated by DLR (Deutsches Zentrum fur Luft- und Raumfahrt e.V., Institute of Optoelectronics) on a DO 228 aircraft and has been flying experimental sites for European team members.  More information on this program and hyperspectral device can be found at http://www.op.dlr.de/dais/welcom.html.  Imagery from this program is distributed in ENVI format, which can already be imported by TNTlite.

IFOV:  3.3 milliradians
Ground Resolution:  10 meters at 1000 meter altitude
Total Scan Angle:  52 degrees

512 pixel swath
79 spectral bands from .45 to 12.00 µm

73 in range .45 to 2.45 µm
6 in range 8.00 to 12.00 µm

CASI (Compact Airborne Spectrographic Imager).  This is a lightweight, compact hyperspectral scanner manufactured by ITRES Research Limited, Suite 155, East Atrium, 2635-37 Avenue N.E., Calgary, AB T1Y 5Z6, Canada.  For more information use phone (403)250-9944, FAX (403)250-9916, email info@itres.com or see their web site at http://www.itres.com/casi/casi.html.  MicroImages has requested format information and sample images from ITRES so that import of its imagery can be added to TNTlite.

Total Scan Angle:  44.7 degrees
512 pixel swath
19 spectral bands maximum selected from .40 to 1.00 µm

AISA (Airborne Imaging Spectrometer).  This is a lightweight, compact hyperspectral scanner manufactured by Spectral Imaging Ltd., Kaitovayla 1, P.O. Box 110, Oulu  90571, Finland.  For more information use phone (3588)551-5595, FAX (3588)551-4496, email aisa@specim.fi or see their web site at http://www.specim.fi.  Imagery is recorded directly on a rugged portable computer in the air or on the ground.  This is a low cost hyperspectral imager relative to the other commercial hyperspectral imagers.  MicroImages has sample AISA image files and their format documentation, and a direct import into TNTmips is planned.

IFOV:  1 milliradian
Ground Resolution:  1 meter at 1000 meter altitude
Total Scan Angle:  20 degrees
Digitization:  12-bits
360 pixel swath
286 spectral bands from .43 to .90 µm

Updates/Military-Intell.

Uses.  The magazine Aviation Week & Space Technology, November 23, 1998, page 56, contains a short article entitled U-2 To Get Improved Targeting Capability.  It discusses upgrades to the fleet of U-2s to “expand the types of targets the reconnaissance aircraft can see”.  The article continues later:

“However, the Multi-Sensor Agile Reconnaissance System (Mars) will try to deal with those shortfalls.  The goal is to reduce to no more than a few minutes the time it takes to transmit targeting data derived from the U-2’s signals intelligence suite, synthetic aperture radar, multispectral sensor and a new hyperspectral sensor that will be added as part of the Mars program.  Furthermore, the hyperspectral sensor—which looks at slices of the light spectrum—should allow U-2s to find hard-to-detect, high priority threats such as concealed targets and weapons of mass destruction.”

“The electro-optical hyperspectral sensor will look at approximately 300 separate frequency bands that allow it to distinguish different types of material.  Each material reflects light in only certain bands, so the optical signature detected by the sensor can be used to characterize a possible target.  That will allow the U-2 to detect nerve agents, materials used in making of weapons of mass destruction or vehicles concealed under camouflage, said ...”

“Air force planners are still determining what resolution and detection ranges [wavelength ranges] they want for the new sensor.  It is expected to be smaller, lighter and have less resolution [in other words, better resolution] than the high resolution Syers (Senior Year Electro-Optical Reconnaissance System) [already on the U-2’s] with its seven-band multispectral sensor.”

Restrictions.  The newspaper Space News, Vol. 9, No. 48, December 14-20, contains the latest update on the U.S. controversy over limiting hyperspectral imagers on spacecraft in the article:  Pentagon Likely To OK Some Hyperspectral Sales.  The article deals specifically with OrbImage’s attempt to obtain a license to sell hyperspectral imagery from the 8 meter hyperspectral device they are adding to the OrbView 4 commercial imaging satellite for the U.S. Air Force, scheduled to launch in 2000.  Extracts from this article summarize it and the difficulty U.S. industry faces in trying to compete in this area.  This “limit-the-public-resolution to what other countries can do now” policy is very similar to the situation currently imposed on RDL for their first-ever U.S. license to launch their RADAR-1 commercial radar imaging satellite.  This parallel situation in radar was reviewed in Space News, Vol. 9, No. 25, June 22-28, in the article RDL Nabs First License For U.S. Radar Satellite.

 “But some officials in the intelligence community are fearful that the widespread availability of hyperspectral data could harm U.S. national security.  The Defense Department therefore likely will impose restrictions on the resolution of the hyperspectral data sold to non-U.S. government customers [from the OrbView 4 device].”

“The restrictions will bar commercial sales of hyperspectral data with a spatial resolution of better than 20 meters, a Pentagon source said.  Spatial resolution is a measure of sharpness.  For example, ground features 20 meters across show up in imagery with 20 meter resolution.”

“The Pentagon source said any hyperspectral data, raw or processed, will have to be ‘fuzzed-up’ for sale to non-U.S. government customers.”

“U.S. companies are permitted by law to sell optical satellite imagery with 1 meter resolution.  However, hyperspectral data, although considered to be optical, has unique characteristics that put it into a different category in terms of military sensitivity.”

“The proposed restrictions on the commercial sale of hyperspectral data also will apply to Space Technology Development Corp., which is working on a hyperspectral satellite in partnership with the Navy.  However, the spatial resolution of that sensor is 30 meters, so it probably would not be affected by the spatial resolution limit.”

“The resolution restrictions could be relaxed once the hyperspectral data is better understood, the Pentagon source said, noting that some of the more conservative elements of the intelligence community wanted to bar commercial sales of hyperspectral data altogether.”

Aircraft-acquired AVIRIS hyperspectral images of test sites around the world are available to anyone at 30 meter resolution.  The 1998 late-fall and 1999 early-spring low-altitude AVIRIS joint campaign of JPL, NASA, and NOAA for contractors such as MicroImages is collecting hyperspectral imagery at 3 to 8 meter resolution.  These images can be downloaded from the JPL web site by anyone world-wide.  Various commercial aircraft hyperspectral devices with resolution capabilities that match or better this 8 meter resolution are available for purchase from sources outside the United States; they are summarized in this and the V5.90 MicroImages MEMO.

Spaceborne Hyperspectral Imagers.

Almost nothing new was learned about technical aspects of plans for spacecraft based hyperspectral devices this quarter.  At the NASA hyperspectral project startup briefings in October, no other pending or future source of imagery during these two year projects was discussed other than continued operations of the AVIRIS.

ALI (Advanced Land Imager).  The main sensor on ALI is an experimental push broom spectrometer designed to test components for a possible Landsat 7 follow-on instrument.  It is to be launched in May 1999 as part of the EO-1 (Earth Observation) satellite program.  It will follow the identical orbit track of Landsat 7 a few minutes behind it.  It was to contain, among other special secondary devices, two different hyperspectral imagers.

A series of short news articles over the past four months has totally clouded the future of these particular sensors.  First it was announced that the commercial satellite vendors, OrbImage in particular, had mobilized some political support opposing this government activity, as they planned to launch an experimental hyperspectral scanner in their ORB-4 satellite.  [This sensor on ORB-4 is being subsidized by the U.S. Air Force.]

Next it was reported that neither of these hyperspectral sensors would be on ALI due to design problems.

The latest report is that NASA was attempting to buy the hyperspectral imager from the canceled CLARK satellite for ALI.  You may recall that the Lewis and Clark satellites were to be experiments in the preparation of low cost exploration satellite missions by private industries.  Lewis, with its important experimental hyperspectral image package, failed to reach orbital operation.  Clark was then canceled earlier this year, as it was way over budget.

At this point, these important public domain hyperspectral capabilities of ALI are certainly “up in the air” but definitely not in the sense we would like.

Internationalization.

The TNT products have been internationalized for several quarters [capable of being used in many languages].  There are still places where English pops up within some processes, and these are gradually being located and fixed.  Most recent effort has concentrated on creating utilities and features for the actual creation and use of local language versions of TNTmips.

* Localization.

Introduction.

Localization kits are now available to use the TNT products in Chinese, Japanese, Russian, German, and other languages.  MicroImages’ Russian translations of the needed resource files for V6.00 can be obtained from microimages.com.  The resource files for Chinese, Japanese, and German will also subsequently be posted at the same place when available for V6.00.  A color plate entitled TNT Products—Chinese Localization is attached to illustrate this user interface.

Effort on new localization features has concentrated on providing utility changes and tools that make local languages and fonts more accessible and reduce the work of maintaining translations for the current release of the TNT products.  The value of these tools, provided to dealers a month ago as part of the beta releases of V6.00, is summarized in email from one of them.

“One of the first things I tried was the new localization tool.  It works great and saves lots of time in comparison to the ‘work-around’ I had been using.  Maybe you remember, for updating of the resource files I compared the versions with Textpad, import the result and both versions of the resource file to dbase, let dbase change the resource file and export to plain text format and then start translating new lines.  Really very time consuming.  The translation of the resource files is now underway.  I calculate that it would take around 40 hours to update from 5.7 to 6.0.”

Create a Locale.

The set of files needed for a locale can now be packaged into a convenient file in TNTmips via Support/Localization/Create or Update Locale.  This utility creates a single installable file for each locale (in other words, for a language), which is easier to distribute or post to a web site.  Unless a locale file is encrypted, its contents can be used and altered by anyone who has it.

Encrypt a Locale.

Those dealers or clients creating translations of TNT product interfaces (the locale file) now have the option of encrypting it at Support/Localization/Encrypt Local Files.  Encrypting a locale file restricts its use to those who have obtained a password from the translator/owner who can then, if desired, sell or otherwise control its use.  However, this encryption system has been set up so that free TNTlite users (in other words, those with no-key present) can always use any encrypted or public localization file, whereas professional users require the password unless the encrypted locale file has been set up for free public use.
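
The access rule described above reduces to simple logic (the function and parameter names here are hypothetical, for illustration only, and not the actual TNT implementation):

```python
def may_use_locale(is_tntlite, locale_encrypted, locale_public, has_password):
    """Hypothetical sketch of the rule described above:
    free TNTlite (no-key) users can always use any locale file;
    professional users need a password for an encrypted locale file
    unless its translator/owner has marked it for free public use."""
    if is_tntlite or not locale_encrypted:
        return True
    return locale_public or has_password
```

The notable design point is the first branch: encryption never restricts TNTlite users, so translators can control professional use of their work without shrinking the free audience.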

Update a Locale.

Those providing translations can now use a new merge utility provided via Support/Localization/Create or Update Locale to quickly identify the additional translations required to update their current translation (for example, from V5.90) to a newer version of the TNT products (for example, V6.00).  This merge utility updates older native language locale files by adding or substituting English where changes are required (usually about 10% of the total).  The translator can then review these new native language locale files representing the latest version and translate any English which has been added or inserted.  The result is a new, current set of locale files with less than 10% of the effort of creating the original, first translations.  Translators can also use this merge process to create their initial, untranslated locale file set for translation.  Clients using an older set of locale files can also, at a minimum, use this utility to upgrade them for the current version by inserting English.
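
The merge logic described above can be sketched as follows (representing a resource file as an ordered mapping of message identifiers to strings is an assumption for illustration; the actual TNT locale file format differs):

```python
def merge_locale(new_english, old_translation):
    """Hypothetical sketch of the merge described above.
    For each message in the new English resource set, keep the existing
    translation when one is present; otherwise insert the English string
    so the translator can find and translate it later."""
    merged = {}
    for key, english in new_english.items():
        merged[key] = old_translation.get(key, english)
    return merged
```

After the merge, the untranslated entries are exactly the English strings, so a translator can scan for them instead of diffing whole files by hand, which is the time savings the dealer email above describes.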

Switch Locales.

You can now use the “Locale” tab in the “Setup/Preferences” dialog to choose from available languages (locale sets).  This allows you to switch between languages within the TNT products for any locale files you have installed (for example, from Chinese to English and back).

Create a Dictionary.

A TNT dictionary utility is now available at Support/Localization/Generate Dictionary.  Use it before beginning a translation to build your own TNT and geospatial dictionary.  It creates a text file containing all words that must be translated in the version of the TNT products you have installed, sorted by frequency-of-use or alphabetically.  Use this file to study the words used in the TNT products and develop and assign their new language equivalents before starting any translation.  This allows a rigorous technical dictionary of TNT and geospatial terms to be developed for the first translation and maintained as new TNT releases add new terms.
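
The word listing such a utility produces can be sketched like this (a hypothetical illustration working from a list of interface strings; the actual TNT utility works from the installed resource files):

```python
import re
from collections import Counter

def build_dictionary(strings, alphabetical=False):
    """Collect every word appearing in the given interface strings and
    return them sorted by frequency-of-use (default) or alphabetically,
    as a starting point for a translation dictionary."""
    counts = Counter(word.lower()
                     for s in strings
                     for word in re.findall(r"[A-Za-z]+", s))
    if alphabetical:
        return sorted(counts)
    return [word for word, _ in counts.most_common()]
```

Frequency ordering puts the most common interface terms first, so the translator settles the vocabulary that matters most before touching the long tail.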

Change Text Encoding.

Translations may be prepared in the TNTmips text editor or some other text editor.  Your favorite text editor may save its text files in an encoding other than the UTF8 encoding required by the TNT products.  Under these circumstances, you can now use a new utility at Support/Localization/Change Text File Encoding to make a copy of these files in the UTF8 encoding.
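
Such a conversion amounts to re-reading and re-writing the file (a minimal sketch; the paths and the source encoding name are examples, not the TNT utility itself):

```python
def to_utf8(src_path, dst_path, src_encoding):
    """Copy a translated text file into the UTF-8 encoding that the
    TNT products require, decoding from the editor's own encoding."""
    with open(src_path, "r", encoding=src_encoding) as f:
        text = f.read()
    with open(dst_path, "w", encoding="utf-8") as f:
        f.write(text)
```

The copy is made to a new file so the translator's original, in its original encoding, is left untouched.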

Printing.

Anaglyph stereo images can now be printed as part of a layout.

Map layouts can be “printed” into a PDF file and used in any Adobe Acrobat Reader.  This feature is similar to the production of a TIFF, EPS, or print file.  Maps are already widely distributed in PDF form and can now be produced by the TNT products.

Installed Sizes.

Loading a full installation of TNTmips 6.0 onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.

Platform                                      V6.00    V5.90
PC using W31                                   87 MB    77 MB
PC using W95                                   75 MB    96 MB
PC using NT (Intel)                            75 MB    96 MB
PC using LINUX (Intel)                         77 MB    66 MB
DEC using NT (Alpha)                           72 MB    97 MB
PMac using MacOS 7.6 and 8.x (PPC)             96 MB    89 MB
Hewlett Packard workstation using HPUX        106 MB    96 MB
SGI workstation via IRIX                      130 MB   115 MB
Sun workstation via Solaris 1.x                93 MB    84 MB
Sun workstation via Solaris 2.x                88 MB    82 MB
IBM workstation via AIX 4.x (PPC)             117 MB   105 MB
DEC workstation via UNIX=OSF/1 (Alpha)        127 MB   120 MB

V6.00 of the HTML version of the Reference Manual including illustrations requires 35 MB.  Installing all the sample geodata sets for TNTlite and TNTmips requires 168 MB.  The 45 Getting Started Booklets require a total of 60 MB.

Upgrading.

If you did not order V6.00 of your TNTmips and wish to do so now, please contact MicroImages by FAX, phone, or email to arrange to purchase this upgrade or annual maintenance.  Upon receipt and processing of your order, MicroImages will supply you with an authorization code by return FAX only.  Entering this code when running the installation process allows you to complete the installation and immediately start to use TNTmips 6.00 and the other TNT professional products.

If you do not have annual maintenance for TNTmips, you can upgrade to V6.00 via the elective upgrade plan at the cost in the tables below.  Please remember that new features have been added to TNTmips each quarter.  Thus, the older your version of TNTmips relative to V6.00, the higher your upgrade cost will be.  As usual, there is no additional charge for the upgrade of your special peripheral support features, TNTlink, or TNTsdk that you may have added to your basic TNTmips system.

Within the NAFTA point-of-use area (Canada, U.S., and Mexico) and with shipping by UPS ground.  (+150/each means $150 for each additional quarterly increment.)

TNTmips price to upgrade from:
Product Code        V5.90   V5.80   V5.70   V5.60   V5.50   V5.40 and earlier
D30 to D60 (CDs)    $250    450     600     750     900     +150/each
D80                 $375    675     900     1050    1200    +150/each
M50                 $250    450     600     750     900     +150/each
L50                 $250    450     600     750     900     +150/each
U100                $450    800     1000    1200    1400    +200/each
U150                $615    1100    1450    1700    1950    +250/each
U200                $780    1400    1875    2175    2475    +300/each

For a point-of-use in all other nations with shipping by air express.  (+150/each means $150 for each additional quarterly increment.)

TNTmips price to upgrade from:
Product Code        V5.90   V5.80   V5.70   V5.60   V5.50   V5.40 and earlier
D30 to D60 (CDs)    $300    560     750     900     1050    +150/each
D80                 $425    800     1050    1200    1350    +150/each
M50                 $300    560     750     900     1050    +150/each
L50                 $300    560     750     900     1050    +150/each
U100                $500    850     1050    1250    1450    +200/each
U150                $665    1150    1500    1750    2000    +250/each
U200                $830    1450    1925    2225    2525    +300/each

MicroImages Authorized Dealers

Two new dealers were added during the past quarter.  Any active client or anyone else interested in becoming a dealer should contact Terry, Lee, or anyone else at MicroImages.  Inquiries are welcomed from anyone, big or small.

Canada—IMAGETECH Resource Laboratories, Inc.

MicroImages is pleased to present IMAGETECH as a new MicroImages Dealer located in Montreal, Quebec.  IMAGETECH provides Digital and Graphics Services to clients such as the Canadian Space Agency, RADARSAT Inc., Canadian Center for Remote Sensing, SPAR Aerospace, and others.  As a result, they have had increasing demand to provide expanding geospatial analysis services and products that can be completed with the TNT products.  For further information, please contact Ursula Kobel at voice (514)397-9866 or FAX at (514)397-9860 or by mail at 254 Rue Queen, Montreal, H3C 2N8, Canada.

Colorado—Common Sense AG Consulting.

MicroImages is pleased to present Common Sense AG Consulting as a new MicroImages Dealer located in Loveland, Colorado.  John Rodowca has been an active user of TNTmips as the precision farming research coordinator of the Wilbur Ellis international agrochemical company (~$1 billion gross revenue).  During 1998, John left Wilbur Ellis and formed this consulting firm to specialize in the application of geospatial analysis in agriculture and related industries.  John has many years of experience promoting the adoption of agricultural technology in large international agricultural corporations.  As a result, he will specialize in providing consulting services to assist in the proper corporate institutional adoption of geospatial analysis procedures in precision farming, agricultural insurance adjusting, and related areas.  For further information please contact John or Dee Rodowca at voice/FAX (970)622-8618 or by mail at 2708 West 29th Street, Loveland, CO  80538, USA.

Discontinued Dealers

The following dealer is no longer authorized to sell MicroImages products.  Please do not contact them regarding support, service, or information about the TNT products.  Please contact MicroImages directly or one of the other MicroImages Authorized Dealers.

Brydun Geomatics.  (Jack Henry) of Whitecourt, Alberta, Canada is discontinued.

Computers

This continues a string of recommendations for increasingly powerful Gateway computers suitable for running the TNT products for about $3500.  This Gateway “top-of-the-line” desktop computer is now a 450 MHz Pentium II with a 16 GB hard drive and a read/write/erase CD drive and 8 MB of display memory (last quarter it was a 400 MHz Pentium II with 10 GB drive and 4 MB of display memory).

Top Power for the Price.

Gateway G6-450XL ($3450)

  • Intel 450 MHz Pentium II

  • 128 MB SDRAM

  • 512 KB internal cache

  • 16 GB 9.5 ms ultra ATA hard drive

  • 19" EV900 color  monitor (.26 dp)

  • AGP display board with 8MB memory

  • 2X DVD-ROM drive with MPEG2 Decoder

  • Philips CD-RW CD-Rewritable Drive

  • 3.5" diskette drive

  • TV/FM tuner card

  • Soundblaster sound card and 3-piece speaker system

  • 56K modem

  • Tower case

  • Keyboard and MS Intellimouse

  • W98, MS Office 97 (w/o Access)

PC Magazine Recommendation.

The following Gateway configuration was picked as the top editors’ choice from 29 brands reviewed in PC Magazine, December 1, 1998.

Gateway G6-450 ($2632)

  • Intel 450 MHz Pentium II

  • 128 MB SDRAM

  • 512 KB internal cache

  • 9.6 GB 9.5 ms ultra ATA hard drive

  • 19" EV900 color  monitor (.26 dp)

  • STB Velocity 4400 AGP display board with 16 MB of SDRAM memory

  • 2X DVD-ROM drive with MPEG2 Decoder

  • Philips CD-RW CD-Rewritable Drive

  • 3.5" diskette drive

  • Soundblaster sound card and 3-piece speaker system

  • 56K modem

  • Keyboard and MS Intellimouse

  • W98, MS Office 97 (w/o Access)

Web Site

Traffic.

MicroImages has compiled information on the FTP transfers from the microimages.com web site over the past year.  Averaged over 7 days a week, 24 hours per day, microimages.com FTPs 1.5 GB per day.  This is about 10% of the maximum daily capacity of the T1 line connecting it to the Internet.
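
As a rough arithmetic check of that figure (a T1 line carries 1.544 Mbit/s):

```python
# Maximum daily throughput of a 1.544 Mbit/s T1 line, in bytes.
t1_bytes_per_day = 1.544e6 / 8 * 86400   # roughly 16.7 GB per day

# Share of that capacity used by 1.5 GB of FTP traffic per day.
share = 1.5e9 / t1_bytes_per_day
print(f"{share:.0%}")                    # prints 9%, i.e. roughly 10%
```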

Movie Gallery.  

/featupd/v59/mpeg/

The MicroImages web master established a movie gallery at microimages.com where simulations produced by TNTmips are provided for your downloading, examination, and testing.  A snapshot frame is provided to acquaint you with the general content of each movie.  Since 3D simulation is currently an actively evolving process in TNTmips, these movies are periodically remade to incorporate new features, enhancements, and improved quality.  Each movie is clearly dated with its production date.  Please come back to this gallery periodically and check out these movies to make sure that you are not continuing to use an older production or missing the latest new examples.

MicroImages is continually being introduced to materials prepared by clients with the TNT products which far surpass the sample materials we, as software developers, can take time to prepare.  This is the general mode of our association:  we provide the software technology, and you show us what it is good for in your areas of expertise.  Thus, we can assume in advance that you will also produce movies which surpass those sample and test efforts produced by the MicroImages staff.  Please provide samples of your movie simulations to MicroImages so that they can be published on microimages.com and shared with others.

General Alterations.

MicroImages’ site has also been extended, altered, or improved in various other areas.

Screenshot Gallery.

 /gallery/

This gallery is updated almost every week.  It provides screenshots and brief descriptions of “in-house” projects ranging from hyperspectral analyses and geoformulas in action to simple georeferencing.

VRML Gallery.

/gallery/vrmls/

Provides VRML worlds created by export from TNTmips.

What’s New Page.

/announce

This page was available previously but is now kept up-to-date frequently.  For example, it has been providing status information on the preparation steps in V6.00.

Published Reviews Gallery.

/reviews/

This page presents the published reviews of the TNT products.

Promo Page

/promo/

Posters, fliers, and other MicroImages promotional materials can be downloaded from this page in PageMaker format for reprinting.

Patches.

The system for patches has been expanded.  The final patches for V5.90 are assembled and posted.  The patch support for V6.00 is also provided.

Miscellaneous.

/downloads/gvim

The configuration files to improve the widely used VI editor are posted.  This editor is often used to write SML scripts.

Prices

There have been no price changes in the TNT professional products for this quarter.

Exchanging Licenses.

A number of clients are switching from single user licenses to floating and multiple user licenses.  Please remember that when such a change is made, only a single existing license can be traded in for credit as part of such an upgrade.

MicroImages is quite liberal in its exchange policy in comparison to our competitors.  Anytime you wish to exchange a single TNT product for some other higher priced TNT product, you can receive full credit for the amount paid for that original MicroImages product—for example, when a single-user, single-processor license is upgraded to a floating license.  However, only one TNT product can be exchanged for a single new product.  An exchange can also be made to a less expensive product, but no credit or refund will be issued for the difference.  If you have any questions regarding how you can apply full credit for your existing product, such as a multiple-user, single-processor license, simply contact MicroImages.

MicroImages will register and provide upgrades for a system that is sold from one client to another.  However, to protect the interests of the new buyer and MicroImages, the system must be the current version and the seller, the previous owner, must sign a form provided by MicroImages indicating that the transfer is being made.

TNTlite.

Individual CDs for TNTlite 6.0 are now available at the following prices:

Individual CDs will be shipped anywhere in the world for $10 prepaid, which includes shipping costs by airmail only.

100 CDs can be ordered all at one time for $300 plus shipping by the method you specify.

100 CDs can be ordered before the reproduction run of V6.10 for $200 plus shipping by the method you specify (can be shipped cheaply with your upgrade).

The price of the TNTlite kit containing printed versions of all 45 booklets (1000 pages) is now increased from $40 to $50, including shipping by airmail only, anywhere in the world.  This increase reflects the increased costs of shipping the additional printed booklets which have been added in the past several quarters.

Papers on Applications

TNT Reviews.

The following reviews of TNTlite 5.7 and TNTlite 6.0 [an early beta version] have been recently published.  Copies of both of these very favorable reviews are enclosed.

A Software Review:  MicroImages TNTlite Version 6.0.  by Ray L. Harris, Jr., Geographic Information Systems Engineer.  In Photogrammetric Engineering and Remote Sensing.  November 1998.  pp 1049-1053.

Software Review.  by Art Busbey, Department of Geology, Texas Christian University.  In Geotimes.  April 1998.  page 42.

Rewarded Papers.

The following papers qualified for dollar rewards.

Precision Ranching:  West Texas rancher develops high-tech approach to control mesquite.  by Kevin Corbley.  Modern Agriculture, Vol. 1, Issue 7, Fall 1998.  pp 13-15.

Creating Good Management Zones:  How to Capitalize From Flexible Data Integration.  by Kevin Royal.  Modern Agriculture, Vol. 1, Issue 7, Fall 1998.  pp 26-28.

[This paper by Kevin is posted on microimages.com and will be mailed in printed form upon request.]

Other Papers Referencing TNTmips.

Dealing with the Past:  Mapping and cleaning up 40 years of cast-offs at McMurdo Station.  by C.K. Bretz, P.J. Iampietro, and B.G. Kvitek.  EOM, September 1998.  pp 11-13.

MicroImages TNTlite Version 6.0, a Software Review.  by Ray L. Harris, Jr.  In Photogrammetric Engineering & Remote Sensing,  Nov. 1998. Volume 64, No. 11. pp 1049-1053.

Forestry Management with GIS:  Industry Taps Image Processing and GIS Earn Green Certification.  by Robert Kolosvary and Kevin P. Corbley.  GIM International, Vol. 12, Number 8, August 1998.  pp 27-29.

Managing Biodiversity:  GeoTechnologies assist Amazon oil exploration impact study.  by Fred H. Green.  EOM, November 1998.  pp 12-15.

Project Workbook

Dr. Jack F. Paris has used TNTmips extensively for more than 10 years in teaching and research and operates many TNTmips stations.  As an example, a proposal is currently pending to expand his academic classroom facilities from their current 20 TNTmips stations to a total of 40.  Jack has just published a new TNT Project Workbook of more than 180 pages ($45) for instructional use.  It takes the reader through a series of exercises “From Start to Finish” of a geospatial project.  The TNTlite geodata sets needed to complete these exercises are also provided on an accompanying CD ($5).  A flier from Dr. Paris is enclosed with more details on his book and a coupon which can be used to order it.  Please return this order coupon to Dr. Paris and not to MicroImages.

Reference Book

The following book is a very good technical reference to add to your bookshelf.  It contains several good sections on hyperspectral imaging concepts.

Remote Sensing:  Models and Methods for Image Processing.  by Robert A. Schowengerdt.  1997.  Academic Press.  522 pages.

Promotional Activities

New TNTlite Carrier Card.

The TNTlite 6.0 CD is now attached to a new, folded, updated delivery card with revised instructions and descriptive material.  A copy of this card is enclosed, and it is used with the thousands of V6.00 CDs being shipped.

Posters.

A variety of new promotional posters is included with your V6.00 shipment.  These posters, earlier posters, and the TNTlite flier are all posted at microimages.com in PageMaker 6.5 format for printing at any size up to the 30 by 40" size for which they were designed.  Dealers and anyone else can download these materials for printing on any color printer.

Easy Go.

Promotes the savings of using the one-product-does-it-all idea of the TNT products.

Deliver the World with TNTatlas.

Promotes the free use of TNTatlas for delivering integrated geospatial data sets.

Put Down The Toy Shovel.

Stresses that the wrong tools make simple tasks difficult.

All the Spectral Bands.

Promotes the depth of the features being provided free for the analysis of hyperspectral images.

Ready! Set! wait.

Urges using software which is kept up-to-date.

Visualize Your World.

A color general purpose poster.

Powerful!

Simply promotes TNTlite 6.0.

A True Story.

Reviews the software one company is using in place of TNTmips.

TNT products.

Promotes geospatial analysis tools that grow with you.

APPLIDAT:  Workshop on Your desktop.

Promotes the use of APPLIDATs to “glue” it all together for end users.

U.S. National Parks:  Death Valley, NV.

Provides a colorful poster combining various geodata elements laid out in TNTmips.

TNTmips:  there’s no limit to your horizons.

A color poster created in TNTmips.

Yellowstone & Grand Teton National Parks.

A color poster created in TNTmips.

Noteworthy Client Activities

MicroImages does not contract for projects using the TNT products.  In this fashion, we have avoided competing with our dealers and clients.  However, this also means that we are not gaining from the experience of using geospatial analysis on complex projects.  It is therefore very important that you communicate with us about your experiences, both pro and con.  It helps us greatly if you send examples of your geospatial analysis intermediate results so that we can analyze what you and we may be doing right and wrong.  Many important new features have been added to the TNT products only after examining the results and problems experienced in producing sample products sent to us.  The following summaries result from information about projects which was provided to MicroImages for examination.

Australia.

Southern Remote Sensing, a MicroImages Authorized Dealer in Australia, publishes a quarterly newsletter; the current issue is SRS Quarterly Vol. 2, #4.  Richard DuRieu would be happy to add anyone to the distribution list: simply send an email request to srs@ozemail.com.au.

Finland.

Citymodel.  Soil and Water, a MicroImages Authorized Dealer in Finland, has created a new product called Citymodel for sale to the telecommunication and network industries.  A color plate illustrating and describing this product is enclosed.  The title of the plate is Citymodel of Jyväskylä in Central Finland created at Soil and Water Ltd. using TNTmips.  Contact Pentti Ruokokoski for any additional information on this product beyond that shown on the back of the plate.  Pentti has just sent MicroImages an orbit simulation movie of this same Citymodel produced with the new orbit simulation feature in TNTmips V6.00.  This movie orbits the approximate center of the model, showing all the 3D buildings moving with respect to each other.  Permission is being requested from Soil and Water to permit downloading of this early-result, 22 MB movie from the movie gallery at microimages.com.

Flood Plain Maps.  Soil and Water has been using a combination of TNTmips and a stereoplotter system to produce 0.5 meter elevation DEMs and contour maps.  These printed and electronic map sheets represent an area in central Finland of low relief, ranging from only 83 meters to 100 meters above sea level.  The stereoplotter is used to collect about a million elevation points scattered uniformly over each map area from stereo photo pairs.  Since the surface elevation in these areas varies over only a few tens of meters, simply importing and surface fitting these points in TNTmips does not preserve important barriers to flooding.  Thus, the plotter is used a second time to sample 50,000 elevation points along all lines and points of inflection in the subtle topography, such as the stream course, both edges of the stream bank, the tops of knolls, the edges of road embankments, and so on.  These points were imported from database files in sequential order into individual 3D vector lines using their type identification code.  These 3D vectors were selected as breaklines and inserted into the TIN created from the million surface points.  The DEM and contour map produced from this new TIN clearly reflect all the subtle topographic details.  Even the very low gradient stream courses run downhill, as this was ensured by the accuracy of the stream line digitizing and preserved exactly through the breakline superposition into the surface TIN and surface fitting.
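The effect described above can be illustrated with a generic NumPy/SciPy sketch (not the TNTmips procedure; the terrain, point counts, and embankment geometry below are invented for illustration): a surface fit from uniformly scattered points smooths away a narrow flood barrier, while adding densely sampled points along the barrier's crest and toes preserves its full height.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic terrain: a 90 m plain crossed by a narrow 2 m high
# embankment (a flood barrier) running north-south at x = 50.
def terrain(x, y):
    return np.where(np.abs(x - 50.0) < 1.0, 92.0, 90.0)

# Step 1: fit a grid from ~400 uniformly scattered elevation points,
# standing in for the stereoplotter's bulk surface sampling.
pts = rng.uniform(0.0, 100.0, size=(400, 2))
z = terrain(pts[:, 0], pts[:, 1])
grid_x, grid_y = np.meshgrid(np.linspace(0, 100, 201),
                             np.linspace(0, 100, 201))
dem_plain = griddata(pts, z, (grid_x, grid_y), method="linear")

# Step 2: resample densely along the breaklines (toe, crest, toe)
# and refit the surface with those points included.
by = np.linspace(0.0, 100.0, 200)
breaklines = np.vstack([np.column_stack([np.full_like(by, x0), by])
                        for x0 in (49.0, 50.0, 51.0)])
bz = terrain(breaklines[:, 0], breaklines[:, 1])
dem_break = griddata(np.vstack([pts, breaklines]),
                     np.concatenate([z, bz]),
                     (grid_x, grid_y), method="linear")

# Along the crest (the grid column at x = 50) the breakline surface
# holds the full 92 m barrier; the plain fit smooths most of it away.
crest_plain = np.nanmean(dem_plain[:, 100])
crest_break = np.nanmean(dem_break[:, 100])
print(round(crest_plain, 2), round(crest_break, 2))
```

A true TIN breakline insertion constrains triangle edges to follow the breaklines exactly; this sketch only approximates that by sampling the breaklines densely enough that the fitted triangles honor them.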

V5.90 included a color plate describing a large project completed for the Finnish-Russian Offshore Technology Working Group by Soil and Water which used TNTmips to assemble a geospatial database for the Pechora Sea in the South-Eastern Barents Sea.  Another larger, similar geodatabase is now being prepared for a much larger area containing the Pechora Sea.

Egypt.

A German consulting firm, with the assistance of Focus, the MicroImages dealer in Cairo, is using TNTmips’ networking and other features to design and/or relocate school districts across Egypt.  Under this contract, TNTmips systems are being installed in various locations and Egyptians are being trained in their use.  This contract is sponsored by GTZ, one of the German government’s aid agencies.

Australia.

Geo Mapping Technologies, a MicroImages Authorized Dealer in Australia, was recently selected as one of six corporations authorized to produce government-financed, official DEMs and orthophoto quads (DOQs) of Australia.  The topographic maps are reduced to vector form using a combination of MicroStation and TNTmips.  The resulting vector objects are transformed into a DEM using TNTmips, which is then used in the resection process to produce the final orthophoto.

Canada.

Brydun Geomatics is using TNTmips to routinely produce orthophoto maps for their clients involved in timber management in Western Canada.

Japan.

The Geological Survey of Japan is preparing another TNTatlas using a Hybrid CD with both the Windows and Mac versions of TNTatlas and a shared collection of Project Files.  More information on this CD will be provided in the next MEMO.

Appendix A: Executive Summary of Hyperspectral Project

California’s Monterey County is a leading producer of high value crops, with gross production revenues exceeding $2 billion in 1997.  Chief among these in terms of total value and acreage is lettuce.  Lettuce production is a nitrogen (N) fertilizer-intensive operation, with growers typically applying ~250 pounds of N per acre to assure high yields and quality.  It is estimated that this amount exceeds true crop demand by a factor of at least two.  If growers had perfect information on N demand and the capability to efficiently tailor N inputs to just meet demand, the resulting savings in production costs, for lettuce in Monterey County alone, would run some $30 million annually.

The Monterey County Water Resources Agency (MCWRA) has identified runoff and leaching of excess N from nonpoint (agricultural fields) and point sources to surface- and ground-water as a key environmental and human health concern in the Salinas Valley.  Nitrate contamination (non-compliance with the U.S. Environmental Protection Agency drinking water standard) has significantly reduced the amount of water resources available for beneficial use as drinking and agricultural water in the Valley.  Some municipal wells have been closed as a result, and in at least two instances, communities have been completely deprived of a municipal drinking water supply.  In response, the MCWRA recently formed a Nitrate Management Program and Nitrate Technical Advisory Committee to, among other things, conduct demonstration, outreach, and education activities to encourage implementation of nitrate management practices in agriculture.

Clearly, there are compelling economic and environmental benefits to the development of tools that provide growers with improved information on N demand.  We propose to investigate the use of hyperspectral imagery, AVIRIS in particular, to identify methods of N stress detection in lettuce and other N-intensive crops, using Monterey County as a case study.  In the longer term, improved information on spatial and temporal patterns of N demand may provide a basis for reduced N application (and attendant waste) while maintaining crop yields and quality.

The proposed effort will build on recent collaboration between the Investigators and Dole Fresh Vegetables (Salinas, CA), the world's largest harvester, marketer, and distributor of fresh vegetables and fresh-cut salad.  That study identified spectral differences in the foliage of lettuce plants grown in the greenhouse under different N regimes.  Use of AVIRIS data under this NRA will allow us to examine the way in which foliar spectral effects scale to the field level under various N regimes (trials) established by collaborator Dole Fresh Vegetables and other major Monterey County growers.  Collaborator MicroImages, Inc., a major geospatial software firm, will develop or improve hyperspectral data processing tools specifically for agricultural information extraction by agricultural decision-makers.  These software tools will be furnished free of charge to any interested party.

Project success will be defined in terms of demonstrated effectiveness of the sensor/algorithm combinations in detecting N stress, end-user economic benefit, end-user acceptance of project-developed commercial software, and extent of outreach to agribusiness and regulatory agencies.  Key longer-term measures include the eventual extent of industry adoption, and mitigation of nitrate concentrations in water supplies.

Significant cofunding is offered in the form of personnel time, computation resources, and establishment of N trials.  Computational resources available to the project will include the Spatial Imaging Visualization Analysis Center at California State University, Monterey Bay and the Computational Laboratory of the Ecosystem Science & Technology Branch, NASA/Ames Research Center.  The budget will be administered by the Foundation of the California State University at Monterey Bay.

Appendix B: Letter of Commitment for Precision Ranching Proposal  

NASA reference NRA-98-OES-09

Proposal Title:  Connecting NASA’s Earth-Science-Enterprise Space Assets to Resource Management Needs in Precision Range and Regional Agriculture.

The following was MicroImages’ Letter of Commitment submitted with this proposal.

Promising Early Start.

Approximately 35 years ago, a very small group of professionals began the definition of remote sensing.  At that time, declassified military materials and sensor systems became available to allow wider horizons than those addressed by conventional photointerpretation and photogrammetry.  At that time, those of us with natural resources backgrounds who had the opportunity to work with these original declassified thermal images and the images of the first multispectral scanner became immediately interested in their applications to natural grasslands.

In those early days, the use of remote sensing in rangeland management was of particular interest for a number of reasons.  1) Rangeland managers had large, remote areas to manage.  2) The resolution of the first aircraft imaging systems (thermal and multispectral scanners) was suitable for rangeland applications but not adequate for agricultural, urban, military, and other application areas; in fact, we had no way to handle higher resolution materials of large grassland areas.  3) The images produced from the very first, crude multispectral scanner (which I was responsible for interpreting) showed meaningful spatial variations in grassland areas.  4) Ranchers, the BLM, the USFS, and other grassland managers had less stringent timeliness requirements for the delivery of images than in agriculture.

Years of Disillusionment.

Over the intervening years, NASA has spent many tens of millions of dollars underwriting efforts to establish the utility of remote sensing materials in rangeland management.  I know, I have spent some of them.  This effort has had considerable success in some areas, such as the use of remote sensing of grassland materials in the management of Federal grasslands (Landsat for BLM, USFS, ...) and in global ecology (AVHRR for desertification, ...).  The United States does have large areas of federally managed grasslands to experiment on.  But, on the average, the most valuable rangelands are in private ranches, not in the lands held in trust by our federal agencies.  Furthermore, in almost all other nations, the management of rangelands is in the hands of private individuals and organizations ranging from herdsmen and tribes to estancias to cooperatives to sheep stations.

My qualifications for making such a broad statement are outlined below in a review of how, over those intervening 35 years, I have sought to contribute to the research and development of applications of remote sensing, and more recently GIS techniques, to rangeland management.  I feel I have some basis for the claim that all of our efforts have had almost zero impact on worldwide grassland management practices on these private ranches in 35 years.  This is especially tragic, as Landsat image spatial and spectral resolution has been adequate ever since the successful launch of the first Landsat.

A Case Study.

A brief discussion of one “early adopter” rancher will illustrate the situation of recent years.  MicroImages has been in the business of supplying desktop image processing and geospatial analysis tools for 12 years.  Bert Wallace, owner of the large Peace Pipe Ranch in Texas, purchased one of our first PC based systems over 11 years ago and has updated and used it and the associated equipment to the present.  Bert is a collaborator in this project; details on some of his uses of remote sensing in his ranch management can be found in an article in BEEF magazine (reproduced in the Appendix of this proposal), which is to be reprinted in the next issue of the precision farming magazine Modern Agriculture.  Over these years, Bert has invested more and more time and money in his remote sensing skills, equipment, and software.  Due to the lack of readily available images, he has had to contract directly with SPOT Image and with airphoto firms to collect imagery of his three ranch units.

Bert has continued to work at this complex technology over this decade because “its use makes money for me”.  Yet not a single neighboring rancher in his immediate area (or in Texas), all of whom have similar management needs, has ever followed up on Bert’s success.  The several times he has approached them, the Texas agricultural extension agencies have expressed little interest in his practical approach to large ranch management with remote sensing.  Not a single party has inquired about the methods he uses based on the article in BEEF magazine, which is read by over 100,000 beef growers.  I believe that if you consulted the several other firms engaged in selling software for remote sensing applications (ERDAS, ENVI, PCI, and ERMapper), you would find a similar situation.  Many new, lower cost special purpose software systems are now available for applications in precision farming, yet the term “precision ranching” has not even been seen in print except in the enclosed BEEF article.

Why, after all of this effort and all of these years, do I have this sense of failure, which I think may be shared by others with long experience in this area?  Clearly, all the necessary initial research is done.  Clearly, the results at the Peace Pipe Ranch and a few other early adopters show that the proper application of remote sensing technologies is cost effective.  Clearly, private ranchers are in business to make money.  The early adopters in this field have proven that using remote sensing and related geospatial techniques adds profit to the bottom line.  Why, then, are we still stuck after many years trying to move beyond these very few early adopters?  I know that I cannot sell our complex, state-of-the-art geospatial analysis software products to ranchers!

What is the Problem?

Ranchers are about as conservative as it gets in agribusiness.  They have to be to stay in business, when the loss of a few head, a drought, a fire, a small drop in the price of beef, changes in regulations controlling feedlots, ... can make the difference between keeping the family ranch and losing it.  They have always faced, and continue to face, two interacting obstacles to using remote sensing in precision ranching.  Those of us who have worked with them for many years are well aware of these impediments, as follows.

1) WRONG IMAGE MODEL!  There has not been a readily available supply of appropriate remote sensing imagery at an affordable price.  Applications in ranching were effectively removed from the remote sensing equation when the Landsat program was prematurely turned over to private industry!  This priced the images out of range for practical ranch management (whole scenes had to be purchased, making the cost per ranch acre prohibitive).  Until the advent of the wide use of the Internet and CD-ROMs, no economical distribution mechanism existed (images are still distributed in complicated changing formats—at least from the ranchers’ viewpoint).  Sales of this private imagery into the ranch industry were poor, discouraging further commercial efforts to address the special needs of this market.

Now high image costs, limited availability, and related impediments are being removed:  Landsat 7, MODIS, the Kodak CIR digital camera, lower cost multiband cameras, and pending remote sensing constellations.  Commercial entities such as Resource 21 (Boeing) and TASC are discovering that they can charge for the delivery of only local image segments as well as adjust prices according to the value per acre received by the buyer.  Internet map servers provide a way of selling and delivering such images in a timely and cost effective manner.

2) WRONG ANALYSIS TOOLS!  Simple tools do not exist at the ranch to extract management information from remotely sensed images and other local, related sources of spatial information.  Only the ranch owners at their desks can combine the information extracted from remote sensing (canopy cover, canopy biomass, ...), the stored geospatial information (soil maps, previous treatments such as rotations and nutrient supplementation, ...), the availability of capital, the current and projected price of beef, ... into a management decision to buy, sell, spray, hold over calves, and so on.  They must do the final steps in the analysis of the spatial information in the context of the myriad other factors that enter into operating a profitable ranch!  Only they can draw the boundaries around the current pasture units, determine that two pastures have an open gate and are currently functioning as one, calibrate the images during analysis with field samples of biomass, percent cover, or other qualitative observations, overlay previous management practices such as spraying or other improvements, and so on.  You can take the remote sensing out of ranching, but you cannot take the rancher out of successful management using remote sensing.

Use A New Model.

It is not possible to coach most ranchers into using complex computer tools for the analysis and application of remote sensing and GIS materials in day-to-day management.  Their backgrounds, previous experiences, and more than full-time job of operating a large ranch, often single-handedly, preclude such an approach.  Precision ranching must be accomplished with easily accessed images and other map information combined with simple-to-use tools yielding directly usable management maps and summary information.  MicroImages has shown via prototypes that local remote sensing imagery and analysis software can be combined and shipped to a farmer for immediate use.  Please see the descriptive material on APPLIDATs elsewhere in the Appendix of this proposal.  The sample APPLIDATs created to date show that they can be very simple to operate without any previous training (they coach the user through the process), allow the input of the farmer’s local information (for example, drawing around the field), and produce a result readily used in the next steps in precision farming (for example, vector maps to control pesticide applications).  It is my belief that this same approach can be even more successfully applied to overcome the two major impediments detailed above and to define a new precision approach to cattle ranching which not only makes money, but provides improved land stewardship.

The collaborators and test sites in this project have all already completed successful applications of remote sensing in grassland and ranch management.  All have at least 10 to 20 years of experience in determining how remote sensing images can be applied in rangeland management.  In this project, MicroImages will design a series of APPLIDATs that implement the applications these collaborators have already developed as simple procedures which can be easily “discovered” by a rancher without previous training.  Using MicroImages’ TNTmips software, the NASA images acquired for these sites will be processed into usable form (for example, Landsat 7 processed from level 0 to higher levels, accurately georeferenced, multidate registered, ...).

On January 1, 1999, MicroImages will release a web based TNTatlas similar to our CD-ROM product which can manage and serve up all or portions of a variety of images, GIS overlays, maps, ... on demand for a specific geographic area, for use in Java plug-ins to Microsoft Internet Explorer and Netscape.  As part of this project, MicroImages will expand this TNTatlas server to communicate with ranchers via the network.  Using Java plug-ins, they can select an APPLIDAT from the server and define an area of interest such as a pasture unit.  The server will then extract the appropriate image(s), prepare them into proper form, integrate them into the APPLIDAT, and automatically download it as a single file onto the rangeland manager’s computer.  They will then simply select the icon for the APPLIDAT and proceed with the precision ranching application.  When this system is in place, the collaborators will each invite several interested grassland managers, whose management areas are covered by the test site images provided by this project, to test and evaluate these concepts.
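The server-side step described above can be sketched generically (this is not the actual TNTatlas server; the file names, the `package_applidat` helper, and the JSON/zip packaging are all invented for illustration): given an area of interest, clip the raster to that area and bundle it with the APPLIDAT description into one downloadable file.

```python
import io
import json
import zipfile

def package_applidat(image_rows, bbox, applidat_name):
    """Hypothetical server-side packaging step: clip a raster (here a
    simple list of pixel rows) to the requested area of interest and
    bundle it with an APPLIDAT manifest into a single zip payload."""
    x0, y0, x1, y1 = bbox
    # Clip the raster to the bounding box (half-open pixel ranges).
    clip = [row[x0:x1] for row in image_rows[y0:y1]]
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("image.json", json.dumps(clip))
        zf.writestr("manifest.json",
                    json.dumps({"applidat": applidat_name, "bbox": bbox}))
    return buf.getvalue()

# Example: a rancher requests a 2x2-pixel pasture unit from a 3x4 raster.
payload = package_applidat(
    [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
    (1, 0, 3, 2), "RangeBiomass")
```

The single-file design matters here: the rancher downloads one object and clicks one icon, instead of assembling images, overlays, and software separately.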

Past grassland remote sensing research and development experience of  Lee D. Miller, President, MicroImages

Over the past 35 years, I have funded and conducted research as principal investigator in research and applications in a number of areas of remote sensing of natural resources, including agriculture, range, forestry, watershed management, hydrology, and wildlife.  Supporting numerous graduate students always made it necessary to find their financial support and operating funds from a variety of disciplines, research projects, and sponsors.  Although my Ph.D. is in forestry, over the years there has been some continuity of effort on my part to address the application of remote sensing to grassland management.  I suppose that this is due to beginning my academic career at Colorado State University, where there has always been a major interest in the grassland biome.  It is also true that grasslands are the easiest terrestrial ecosystem in which to apply remote sensing.  With your indulgence, I would like to review the background which leads up to the commitment of MicroImages, my company, to this development project in formulating precision ranching.

I first addressed the use of remote sensing imagery to grassland applications as a portion of my Ph.D. thesis research 35 years ago.  This thesis applied some of the first declassified thermal scanner images to the study of the ecology of steaming and warm ground in Yellowstone National Park.  Trees do not grow on these areas of high geothermal flux, and grasses became indicator species for the various ecological zones of these unique sites and their role in the ecology of the Park.  Larger land animals utilize these “warm” sites and their exposed grasses during periods of heavy snow pack.

Upon graduation, as a faculty member in natural resources at Colorado State University, I completed a $300,000, 3.5 year project under the NSF Grassland Biome Program.  This project developed a field spectrometer for in-situ measurements of the radiance and reflectance of natural grasslands.  These measurements provided the scientific basis and proof for the development of the biomass indices used widely today in multispectral remote sensing applications to grassland and agricultural resources.  Several 2 and 3 band radiometers were developed and tested to directly measure biomass by exploiting these spectral bands and equations.  Aerial multispectral imagery was processed to provide calibrated green canopy biomass for several grassland biome test sites.
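The best-known of these 2-band biomass indices can be sketched in a few lines (a generic illustration; the reflectance values below are invented): the Normalized Difference Vegetation Index contrasts near-infrared reflectance, which green canopy scatters strongly, with red reflectance, which chlorophyll absorbs.

```python
import numpy as np

# Hypothetical red and near-infrared reflectances from a 2-band
# radiometer over dense, moderate, and sparse grass canopy.
red = np.array([0.08, 0.12, 0.30])
nir = np.array([0.50, 0.40, 0.35])

# Normalized Difference Vegetation Index: bounded in [-1, 1],
# with higher values indicating more green canopy biomass.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))
```

The normalized ratio form is what made such indices practical for field radiometers: it largely cancels overall illumination differences between readings, leaving the red/NIR contrast that tracks green biomass.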

Another, larger contract of $800,000 from NASA, entitled “Modeling Energy Flow and Nutrient Cycling in the Natural Semiarid Grassland Ecosystems with the Aid of Thematic Mapper Imagery,” was also completed with me as principal investigator at Texas A&M University.  This project was an attempt at operational applications of TM images at various rangeland sites in the West.  It continued to support my further involvement in rangeland remote sensing and convinced me that the rancher and range manager must be an integral part of any practical program.

It became clear that practical incorporation of the rancher into the remote sensing application required a means on their desktop to view and analyze imagery of their ranch.  Thus, while at Texas A&M University, a $70,000, 2 year project was completed for the University Land Management Program.  A department of this Austin group manages the surface grazing rights on the oil land grant properties of Texas and Texas A&M Universities.  Their project provided the funds to build an early CP/M and Z80 based prototype of a desktop image processing system for use in the management of ranches in Texas.  This began 20 years of microcomputer software development within TAMU and the University of Nebraska.  This university research flowed directly into MicroImages, this project, and the doorstep of the final achievement of the goals of that project:  practical, routine application of remote sensing materials in daily ranch management, that is, precision ranching.

A two year sojourn at NASA/GSFC as a Senior Visiting Scientist provided even further impetus to my interest in image processing on personal computers.  Subsequently, the desktop image processing development work was also funded by grants of $50,000 from IBM and $50,000 from NASA at the University of Nebraska, where I concluded my academic career as a Research Professor.

I was fortunate to have initial start-up funding to organize MicroImages, Inc. via a Phase 1 Small Business Innovation Research grant of $70,000 from NASA, administered via the Stennis Space Center.  For the past 12 years, the excellent staff at MicroImages has developed, marketed, and supported progressively more complex desktop computer image analysis systems.  More recently, we have expanded this to include integrated GIS capabilities, yielding a general system for geospatial analysis now used in 130 nations in all sorts of applications.  Unfortunately, in such a competitive business, forces are constantly brought to bear to “compete” to be the best system in the world.  This can also lead to having many features for many masters, and to software used primarily by remote sensing experts.  MicroImages is now rated at the top of the competition (see the enclosed reprint reviewing IPS systems).  As a result, we have recently been able to devote some time to how we can get resource managers, not remote sensing experts, using these complex concepts without really knowing it.  This has resulted in such products as our TNTview, TNTatlas, APPLIDATs, and our pending atlas web server and browser plug-in products.

All rangeland collaborators in this project, from TX, NV, ND, CA, and Australia, long ago purchased MicroImages’ commercial products and routinely use them today in connection with rangeland management or research.  With these and new, simple, end-user oriented software tools, inexpensive images, the Internet, and the collective past experience of all the collaborators in grassland remote sensing, it is finally possible to bring it all together.  Therefore, I believe that this project will break through, resolve the final problems, and demonstrate that ranchers and other range managers will accept, use, and make money with cost effective access to remote sensing imagery and the proper tools to use it.

Selected Pertinent Publications (selected from more than 100).

[A list of pertinent published papers followed on my Grasslands Research and Microcomputer Image Processing.]

Appendix C:  Abbreviations

For simplicity, the following abbreviations were used in this MEMO:

W31 = Microsoft Windows 3.1 or 3.11.

W95 = Microsoft Windows 95.

W98 = Microsoft Windows 98.

W2000 = Microsoft Windows 2000, which is the new generic name for what has been called NT 5.0 up until recently.

NT or NT4 = Microsoft NT 3.1, 3.5, or 4.0 (3.1 is error prone and thus the TNT products require the use of 3.5 and its subsequent patches).

Mac = Apple Macintosh using the 68xxx Motorola processor and MacOS 6.x or 7.x.

PMac or Power Mac = Apple Macintosh using the 60x Motorola PowerPC processor and MacOS 7.x or 8.0.

MI/X = MicroImages’ X server for the Mac and PC microcomputer platforms and operating systems.

HS = Hyperspectral images or imagery.  This is imagery simultaneously collected in at least 25 or 30 spectral bands.
