

TNT products V5.90
June 1998


Introduction

MicroImages is pleased to distribute V5.90 of the TNT products and the 44th release of TNTmips. Three new processes are being introduced in prototype form: 1) moving views called 3D simulations, 2) road network analysis, and 3) hyperspectral image analysis. The concept of self-contained, easily created products called APPLIDATs (APPLIcations plus DATa) is being introduced.

The following processes have had major features added:

• Image Classification: Introduces a new training set editor into the supervised multispectral image classification process.

• Editing: Two new vector object types can be created, edited, and displayed to provide new geospatial capabilities such as 3D vector elements (vertices with Z values).

• SML: Rapid expansion continues with the addition of 237 new functions and the introduction of 70 object-oriented programming classes; run scripts from a desktop icon; use region and drawing tools; create dialog boxes; read GPS positions; provide access to and from database objects; and encrypt scripts.

Five new Getting Started tutorial booklets are shipping in printed format. All 38 Getting Started booklets produced to date, including several earlier booklets updated with revisions, are included on the V5.90 CD in PDF format.

A total of 282 new feature requests submitted by clients and MicroImages' staff were implemented in various V5.90 processes.

Advanced Users' Workshop

MicroImages will host the 10th Advanced Users' Workshop in Lincoln, Nebraska over four days (Tuesday through Friday), 19 to 22 January 1999. Set this week aside if you plan on attending. Additional material on this workshop will be mailed to you later in a separate mailing.

Summary of New Features

Details on all the following features, and other new features in V5.90, can be found in the expanded descriptions in the detailed sections of this MicroImages MEMO.

Windows 98. V5.90 of the TNT products supports Microsoft Windows 98, including multiple screens.

Visualization. A view-in-view window can be moved about and resized to compare inside and outside layers. Simultaneously view DataTips for as many layers as desired with prefix and suffix identifiers. Automatically scroll to keep GPS position in the view. Zoom level, repositioning, and similar actions can be undone through 10 steps. Zoom to the extents of the active layer or any other layer. Directly display CMY or CMYK layers. Redraw any single group on a layout.

The overall transparency can be defined for each raster layer in a view. Transparency of the cells in a raster can be controlled pixel by pixel by using a coregistered 8-bit mask. The histograms of raster areas selected with regions can be viewed.
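The per-cell transparency idea can be sketched simply: the 8-bit mask value scales how much of the raster cell shows through over whatever lies beneath it. This is an illustrative sketch with hypothetical values, not TNT's actual rendering code:

```python
# Per-cell transparency via an 8-bit mask: 0 = fully transparent,
# 255 = fully opaque. A hypothetical sketch, not TNT's renderer.
def blend(image_cell, background_cell, mask_value):
    alpha = mask_value / 255          # map 0..255 to 0.0..1.0 opacity
    return alpha * image_cell + (1 - alpha) * background_cell

opaque = blend(200, 100, 255)         # mask 255: only the image value shows
halfway = blend(200, 100, 128)        # partial transparency blends the two
```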

16-bit rasters can now have colormaps, which can be created and edited in RGB, HIS, HBS, CMY, and CMYK modes with transparency control for each entry.

Highlight all "attached", "unattached", or "multiply-attached" elements in a given table. Save georeference information automatically for each snapshot.

3D Simulation. Draw a path on a complex 2D view, set viewing parameters, and create a 3D simulation in another window of any or all of the layers. Compute a movie of this simulation in MPEG format, and play it back in real time via a browser.

Import/Export. Import AVIRIS hyperspectral images, CMYK TIFF rasters, and CCRS Landsat and TM images. Export vectors to SDTS with attributes, 16-bit SDTS rasters, and georeference for ERMapper.

Classification. Use a powerful new training set editor to create, modify, and test training sets for use in supervised classification procedures. Use raster and polygon or point vector elements for defining training sets. For example, point elements logged in the field can become circular training sets with radii set from their attributes. Draw a raster mask to define any complex sub-portion of the input images to be classified. Use the new error matrix (confusion matrix) to evaluate results, and combine training sets, set color, label, and so on with a convenient, interactive tabular approach. A priori probabilities can be set for each category and an unknown class set for maximum likelihood classification.
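An error (confusion) matrix of the kind mentioned above tabulates reference classes against classified results; the diagonal counts correct assignments. A minimal sketch with hypothetical class data, not TNT's implementation:

```python
# Error (confusion) matrix for evaluating a supervised classification.
# Rows = reference (ground truth) class, columns = classified result.
def error_matrix(reference, classified, n_classes):
    m = [[0] * n_classes for _ in range(n_classes)]
    for ref, cls in zip(reference, classified):
        m[ref][cls] += 1
    return m

def overall_accuracy(m):
    # Fraction of cells on the diagonal (correctly classified).
    correct = sum(m[i][i] for i in range(len(m)))
    total = sum(sum(row) for row in m)
    return correct / total

# Hypothetical per-cell class labels for three classes.
reference  = [0, 0, 1, 1, 2, 2, 2, 0]
classified = [0, 1, 1, 1, 2, 2, 0, 0]
m = error_matrix(reference, classified, 3)
```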

Hyperspectral. A new hyperspectral analysis process is included with integrated spectral libraries. Tools are provided to combine curves in libraries or build new libraries: average, resample, divide, subtract, add, maximum, minimum, and difference. Additional tools edit curves in libraries: set a value, add, subtract, multiply, interpolate, smooth, normalize, and compute derivative. Perform spectral curve analysis: Remove Continuum and Spectral Feature Fitting.

Import AVIRIS images and apply calibrations: Equal Area Normalization, Log Residuals, Additive Offset Calibration, and Flat Field Correction. Then analyze the images: Spectral Angle Mapper, Cross Correlation, Linear Spectral Unmixing, and Matched Filtering.
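Of the analysis methods listed, the Spectral Angle Mapper is the simplest to sketch: it measures the angle between a pixel spectrum and a reference spectrum treated as vectors in band space, which makes it insensitive to overall brightness. A minimal illustration with hypothetical four-band spectra:

```python
import math

# Spectral Angle Mapper sketch: angle (radians) between a pixel
# spectrum and a reference spectrum; smaller angle = better match.
def spectral_angle(pixel, reference):
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp for floating-point safety before acos.
    return math.acos(min(1.0, max(-1.0, dot / (norm_p * norm_r))))

ref = [0.2, 0.4, 0.6, 0.5]
bright = [0.4, 0.8, 1.2, 1.0]        # same spectral shape, twice as bright
angle = spectral_angle(bright, ref)  # near zero: brightness-insensitive
```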

Object Editor. The object editor and TNTedit now support direct creation, access, and editing of ESRI's E00, Coverage, and Shapefiles.

You can now create and edit vector objects at several new topological levels and convert between them: polygonal, planar, and network. As a result, new vector capabilities are available via these topologies such as assigning a different Z value to each vertex. Z values can be obtained by overlaying an elevation raster. Default records can now be assigned when elements are added. All the vector filters are now provided via a new interface and can be visually tested and evaluated. This filtering approach can also be applied to a vector object via a separate menu process.

Ortho/DEM. SPOT images can be converted to orthoimages using a DEM.

Network Analysis. A new network analysis process uses a network vector object, a network control attribute table, and attributes attached to nodes and so on. You can now compute constrained paths through a complex network system with waypoints (in other words, intermediate stops). Allocations can also be performed to determine areas served by all possible routings.
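Routing through waypoints can be sketched as shortest-path searches run segment by segment between successive stops. The graph, node names, and costs below are hypothetical, and the shortest-path routine is ordinary Dijkstra, not TNT's network engine:

```python
import heapq

# Dijkstra shortest path over a weighted directed graph
# given as {node: {neighbor: cost}}.
def dijkstra(graph, start, goal):
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")

def route_with_waypoints(graph, stops):
    # Total cost of visiting the stops in order, e.g. A -> C -> D.
    return sum(dijkstra(graph, a, b) for a, b in zip(stops, stops[1:]))

graph = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
}
total = route_with_waypoints(graph, ["A", "C", "D"])
```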

Surface Fitting. Breaklines and polygons can be edited into TIN objects, or inserted into them from vector line elements, in the object editor. These lines/polygons can represent drainage, ridges, lakes, and other features and will become boundary conditions in the appropriate surface fitting process.

A new bidirectional surface fitting procedure has been added. It is specifically designed to improve model surfaces from data collected in parallel lines such as in geophysical or other transect-like surveys.

Mosaicking. Bundle adjustment has been improved using singular value decomposition methods to handle indeterminate cases. Two new and better edge feathering methods have been added. Multiple bands can be mosaicked at one time.

SML. SML has extensive additions of 237 new functions and 96 classes. New suites of functions are included for displaying and viewing objects; drawing in views; GPS inputs; creating dialog boxes; and the creation and management of database, vector, and region objects.

All documentation of functions is maintained on-line in SML. An icon editor is embedded, and icons can be used in scripts on toolbars. Scripts can be encrypted. Scripts can be saved as objects in project files along with the geodata they can be applied to. This provides a new product delivery procedure called APPLIDATs where the scripts automatically act on the geodata in the same project file to create self-contained products.

Tutorials. Five new tutorial booklets are enclosed:

• Managing Geoattributes

• Rectifying Images

• Introduction to Map Projections

• Constructing a HyperIndex

• Changing Languages (Localization)

Dropping Platform

MicroImages proposes dropping the Solaris 1.x version of the TNT products from the V6.00 TNT product CDs to make room for other materials. It appears that everyone using Sun workstations has migrated to some version of Solaris 2.x. The Solaris 1.x version will still be compiled, checked, and maintained, and can be provided on custom-made CDs to anyone who specifically requests it. If this will be a problem for any clients still using TNT products on Sun Solaris 1.x, please notify MicroImages as soon as possible.

Priority of Features for V6.00

As usual, it is not clear if the MicroImages software engineers will get all these tasks done for V6.00. Thus the following list only represents our current priority efforts and plans. The designation [available now] means the feature has already been added since the V5.90 CDs were created and can be tested in beta form by downloading the process(es) involved.

System Level. Provide option to install Getting Started booklets. Modify TNT products to start Adobe Acrobat and access each booklet from your hard drive or a CD. Issue separate TNT geodata CDs, removing some datasets from the normal TNT product release CDs. Provide a means to switch between languages while the TNT products are operating. Icons, "Add All" objects in a Project File or directory, and other improvements have been added to the Object Selection dialog [available now].

The FLEXlm floating and multiple user license manager is being upgraded to the latest V6.1.

Visualization. An alternate ArcView-like layer control panel will be added for use in simpler visualizations. It will integrate that product's useful automatic legend generation features. It will be especially useful in creating products in SML.

Smoothing simulation paths by splining in XYZ will be added. A new profile window will show the path relative to all layers involved. Faster "direct" (in other words, not MPEG) simulation will be implemented by precomputing a 16-bit surface and draped layer.

An alternate simpler 3D view control will be added to control only key parameters, especially in SML scripts.

The measure, select, and sketch tools will be integrated for easy use and handling of overlapping features.

Layouts. Legend layout and presentation will be improved.

GPS. [most available now] GPS log files can be recorded or created in simple comma-separated-value text files. When they are used as virtual GPS sources, a dialog is provided to play, set playback speed, interpolate intermediate positions, rewind the log, close the log, and so on.
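A simple comma-separated GPS log of the kind described can be read with a few lines of code. The column layout below (time, lat, lon) is an assumption for illustration, not the actual TNT log format:

```python
import csv
import io

# Hypothetical comma-separated GPS log; the column names are
# assumptions for illustration, not the TNT log layout.
LOG = """\
time,lat,lon
10:00:00,40.8136,-96.7026
10:00:05,40.8140,-96.7031
10:00:10,40.8145,-96.7035
"""

def read_gps_log(text):
    # Parse each row into (time, latitude, longitude) tuples.
    rows = csv.DictReader(io.StringIO(text))
    return [(r["time"], float(r["lat"]), float(r["lon"])) for r in rows]

positions = read_gps_log(LOG)
```

A virtual GPS source as described above would simply replay such positions on a timer instead of reading a live device.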

Wherever a GPS is used in TNT products, it can be either a real or virtual device. A Status and Control dialog can be exposed for each active device showing position, speed, heading, accuracy, number of satellites, and so on. It also provides the controls for selecting symbol style, size, color, and so on in all active views.

A GPS menu item and icon appear on all views to select devices to show cursors, set up a device, open log file, toggle auto scrolling, select reporting units, and so on. A dialog box is provided to set up and configure each new GPS device selected in a view, accessed in SML, used in graphical editing, and other locations. Multiple GPS devices (mixed real and virtual) can be selected. Cursors will be shown for each device. You can designate which cursor controls the scrolling, optionally automatically change scale for diverging cursors, and so on. When GPS devices are available, an attempt will be made to reconnect to the devices when a view is opened.

Styles. The line style editor will be improved. A hatch pattern editor will be added to create, edit, manage, and store "line" fill patterns. Processes which fill polygons will be modified to use them. A new feature will support the insertion of symbols and characters into line styles as they are rendered. Support to copy styles between objects will be added.

Import. Create an import to CAD and vector objects for the native MapInfo format often referred to as TAB. If this proves successful, export, linking, and direct use in the object editor may follow. When importing lines from ASCII files, an option will be added to create nodes for the vertices and attach attributes to handle geophysical and other transect data sources.

Classification. [available now] In supervised classification routines, you can now view, for each class, a histogram of the distance from the class center for each cell assigned to that class. Tools are then provided to graphically split the class into two classes and recompute the classification results.

Hyperspectral Analysis. A new interpretation method called Vector Quantification has been added [available now]. The range of image bands to be processed can be selected and will control all subsequent processes [available now]. This will be expanded to allow the exclusion of atmospheric absorption bands. The hypercube object is being worked on now.

Networking. Additional network analysis features and improvements will be added.

Geophysical Analysis. More optional line-leveling approaches will be added [one available now], as well as reduction to the magnetic pole. The first tool needed to edit geophysical profiles has been added in the object editor [available now]. More features to assist in importing and storing generic transect data will be added.

Object Editor. A new Profile window has been added where a selected line element can be viewed and Z values edited including splining [available now]. New cross-section sketching tools will be added. A "node-turn" table (for example, right turns only) will be added for use in network routing. A feature will be added to step through all selected elements to identify those without attributes. You will be able to convert nodes to points. A semiautomatic tool will be added to locate label points for contours and other "parallel" line element situations.

Polygon Fitting. The Adaptive Kernel and CALHOME methods of polygon fitting (in other words, home range) will be added.

SML. Major expansion of the TNT geospatial programming language will continue to support your development of TurnKey Products (TKPs) and APPLIDATs. An HTML interpreter is being developed for use in various locations in the TNT products. It will appear first in SML to allow the easy creation of scripts for presenting instructions. A method will be created to share common script segments between scripts. You will be able to create and control more layers in the view window: map-grids, scale bars, regions, SML scripts, and so on. New suites of functions will include:

• import and export of objects

• printing

• surface modeling

• layout with control over positions in groups [available now]

• conversion between 8-, 16-, 24-bit, and composite rasters

• conversion between color models: RGB, HIS, HBS, CMY, CMYK, ...
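As an illustration of the color-model conversions listed above, here is the standard RGB to CMYK conversion with black (K) extraction; the TNT products' exact formulas may differ:

```python
# Standard RGB (0-255) -> CMYK (0.0-1.0) conversion with black
# extraction; a textbook sketch, not necessarily TNT's formula.
def rgb_to_cmyk(r, g, b):
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)       # pure black: all ink is K
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)                      # black component common to all three
    scale = 1 - k
    return ((c - k) / scale, (m - k) / scale, (y - k) / scale, k)

cmyk_red = rgb_to_cmyk(255, 0, 0)
```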

Bench Marks. Begin to release SML scripts which will run standard geospatial analysis tests on any TNT platform and report time to complete. These can then be used to evaluate the performance of the TNT products under various hardware and network configurations.
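The benchmark idea is straightforward: run a fixed workload and report the elapsed time. The sketch below uses a stand-in arithmetic workload rather than an actual TNT geospatial test:

```python
import time

# Time a fixed workload and report elapsed seconds, the pattern a
# benchmark script would use. The workload here is a stand-in,
# not an actual geospatial analysis test.
def benchmark(task, *args):
    start = time.perf_counter()
    result = task(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

def workload(n):
    # Deterministic stand-in computation.
    return sum(i * i for i in range(n))

result, seconds = benchmark(workload, 100_000)
```

Because the workload is fixed, elapsed times can be compared across hardware and network configurations.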

Internationalization. [available now] A utility process will merge the translated resource (language) files for any previous version of the TNT products with those of a newer English release. Those items which need new translation will then appear in English in the merged file.

Tutorials. Most effort will be focused upon bringing the existing tutorials current with the features in this version. The following new booklets will be released:

• Using Hyperspectral Analysis

• Sharing Geodata with Other Products

• Operating the 3D Simulator

• Installing the TNT products

Editorial and Associated News [by Dr. Lee D. Miller, President]

Introduction.

Yes, I write this MEMO to you every quarter. And, yes, it does get to be a big chore when it gets so long. Yes, it does give me gray hairs. But, this is my own fault. I can always choose to say it "short" or say it "long" and request less input and fewer color plates.

I create this MEMO from draft inputs from most of the other MicroImages staff, but try to make it look as if it were written by one person with a common vocabulary. Other staff also complete and print the attached color plates. However, in writing every MicroImages MEMO, I prefer to stay in the background, as it does represent the combined efforts of everyone at MicroImages, all of whom contribute in various ways to each new version of the TNT products, of which this MEMO is just a part. Every staff member has had the opportunity to read, edit, and modify every MicroImages MEMO which has ever been released.

While this MEMO takes a lot of effort from everyone, it is important. Without it, you would not know what to look for in each new release. Its creation and review forces everyone at MicroImages (especially me) to "get it all together" and understand what we have individually and collectively accomplished to date and what it contributed to the TNT products. From this we each get the sense of accomplishment that drives MicroImages. From it, we also all plan what we will be doing for you in the next quarter. This last objective is perhaps the most important from a management viewpoint when a dedicated group of professionals with diverse backgrounds are set upon creating and releasing products and features that a much larger company cannot keep up with.

During the last quarter, a significant number of TNTmips systems were upgraded from years back, including several from DOS MIPS (V3.33 and earlier). Unfortunately, clients who, for various reasons, let their TNT product maintenance lapse for a year or two get way behind, as they do not get the interim quarterly MEMOs. As always, anyone (including our competitors) can access microimages.com and review all our previous MEMOs prior to buying or starting to use the latest version of the TNT products.

I have decided with this MEMO that I might also like to occasionally "have my say". As a result, you may periodically find this section where I present my viewpoint on some topic of possible interest.

Hyperspectral.

This MEMO is longer than usual due to the extended sections introducing the Hyperspectral Analysis process and the long sections on SML and the new APPLIDAT, both of which are important extensions of TNTmips. Hyperspectral analysis is a complex subject, and I am sure I have not gotten everything in this section correct and will have to eat some of my words later once I hear from you. I have also taken the usual manager's prerogative to delegate and have assigned more tasks out to the staff to research various image sources and create two Getting Started booklets. One booklet will summarize this section and new materials into an introduction to what hyperspectral imaging is all about, and another (to be completed first since we have the summary in this MEMO) on how to use these new analysis tools in TNTmips.

In compiling this long section, I had to dust off my old memory cells on one of my major areas of research of 20 to 25 years ago. To create some credibility in this complex area, I am attaching a summary of this work as Appendix A. At the same time, several of us had to dig around a lot to get some insight into what is being done by others in scientific labs, in practical exploration settings, and in other commercial products. The section on this topic introduces what we have learned to date. I welcome your input so that we can strengthen our knowledge in this area.

On Earth. By pure coincidence, just as this section, our process, and this MEMO were being completed, NASA, via the commercialization program at the NASA Stennis Space Center in Mississippi, issued a "call for proposals" for projects to define in Step 1 (in other words, these two year projects) the long-term commercialization of hyperspectral applications. Step 3, to be undertaken in several years, would be the design/economic study for the launch of commercial hyperspectral satellites. I was privileged to be able to draw upon the new ideas generated by MicroImages' staff, the ideas of other participants in our proposal, and my own remembered past research experience, to submit a data analysis plan in a proposal. This proposal was submitted from California State University at Monterey Bay (CSUMB) on behalf of NASA/AMES, the Spatial Information, Visualization & Analysis (SIVA) Center at CSUMB, MicroImages, and another agricultural industry partner representing an application area. Proposals are probability games, but the next MEMO will report the outcome.

Around Jupiter. By a second coincidence, during the last stages of writing this MEMO section, a long-time client called me for a letter of support for a proposal call from NASA Headquarters. This scientist used DOS MIPS and the then newly created Feature Mapping 10 years ago to trace out the water bodies on Landsat images defining the land based portion of the rim of the Yucatan dinosaur-killer crater. At the time, he was studying the relationship of the distribution of these water bodies to the incidence of malaria in the area.

His latest project is to assemble a raster-based composite GIS system of the images being collected by Galileo of the three ice moons of Jupiter for public access and use. The projections needed for these moons are spherical and can be easily handled, for the accurate shapes of these moons have not been measured. Georeferencing is provided by the accurate pointing and positioning provided by the other project participant, who is the Galileo Project Team leader at JPL. Three of the four Galileo imaging devices collect hyperspectral images of a total of ~890 spectral bands ranging from .05 µm (the extreme vacuum ultraviolet) to 5.20 µm (the thermal infrared). The fourth has seven higher resolution bands in the visible and "photographic" infrared range, which is of less interest to astrophysicists and astrogeologists. Since all the original images fit within TNTlite, information about this new process was FAXed to this client to strengthen his proposal.

This project would be of particular interest to the computer science oriented people at MicroImages, as they are very interested in science fiction and thus real science. I would also find it of particular interest for similar reasons, because as a naive graduate student, I completed a funded research project for NASA 35 years ago culminating in this report.

Investigation of a method for remote detection and analysis of life on a planet. University of Michigan, Institute of Science and Technology, NASA Report 6590-4-F, Ann Arbor, Michigan. 1965. 33 pages.

In this study, I assumed that such life would be carbon based and contain chlorophyll, which could be easily determined from the collection of geological and vegetative spectral curves I had amassed. Since we humans subsequently orbited devices to look back at the earth, I was proven right in at least one case. This paper was also presented at a symposium where Dr. Carl Sagan, who was also in his formative years, was happy to discuss with me his new ideas. Fortunately, he went on to form even better and more interesting ideas.

RADAR.

Last week, RDL Space Corporation, located in California, was awarded the first ever U.S. Government license to build and launch an SAR Satellite (Space News, Vol. 9, No. 25, front page). Earliest launch date for this RADAR-1 would be in 2001. As you may know, there has been a lot of controversy over this license application, as RDL will build a one meter resolution system. The license has been granted based upon restricting the distribution of the one meter imagery for immediate use by our national security agencies, five meter degraded images for general public sale, and higher resolution images for "particular customers" on a case-by-case government approved basis (noted elsewhere as other "U.S. defense agencies or the governments of key American allies").

"Several U.S. firms have already been licensed to operate 1-meter optical satellites with minimal restrictions. The Pentagon insisted on tougher restrictions for RDL because radar satellites can detect things that optical satellites with comparable image resolutions cannot."

But, as with the optical satellite licenses, the resolution of the public images can be increased automatically to any resolution provided by similar satellites of other nations. Since Canada has announced the launch of Radarsat-2 in 2001 with three meter imaging capabilities, the five meter barrier may never exist.

The article goes on to quote Dutt: "RDL Space Corp. is targeting national security markets both in the United States and overseas. Other potential markets, such as crop monitoring, mineral exploration and insurance, will have to be developed."

So what? A couple of weeks ago, MicroImages learned that a proposal submitted by RDL and the Spatial Information, Visualization & Analysis (SIVA) Center at California State University at Monterey Bay (CSUMB) will be funded by the commercialization program at NASA Stennis Space Center as a Step 2 project to demonstrate the use of high resolution RADAR imagery in the areas of agricultural insurance and precision agriculture. MicroImages will be funded as the third, smaller participant in this project to provide the software modifications needed to support these applications via the standard TNT products. The project will employ overflights of high resolution SAR imagery produced by a JPL and NASA/AMES aircraft program called AirSAR flying over test sites in California and elsewhere.

The Cathedral or the Bazaar.

MicroImages has been questioned over the years about its unusual practices in the areas of frequent upgrades (now biweekly), fast error support, user-driven feature add-ons, extensive user communication, free TNTlite, on-line manuals, and so on. V5.90 adds another question to this list: Why make it possible for you or others to create and sell or give away the new SML-based TurnKey Products and APPLIDATs within the free TNTlite?

An extraordinarily good article on these general ideas was published on the Internet by Eric S. Raymond in 1998. It is entitled The Cathedral and the Bazaar, and can be obtained at http://sagan.earthspace.net/esr/writings/cathedral-bazaar/. It is the first in a series of three articles and contains some very insightful ideas on how excellent software can be developed. Netscape Communications Corp. has stated that this paper helped spur them into opening up the source code of their Communicator product line (for example, browsers) to developers earlier this year. It is well worth the trouble to obtain and read. I found it to be one of the most interesting computer articles I have read in years. This was probably because it made me sort out the reasons for my past decisions at MicroImages, wherein I found that they agreed with those of the author.

Raymond's Cathedral approach is the approach of SUN, IBM, and the like. Control everything, design in infinite detail, release infrequently with extensive checking, and so on. Build software as you would build a cathedral. The Bazaar approach has produced LINUX, the only serious pending competition to Microsoft Windows. It has produced Apache, which is being used for more web servers than all its competitors and has just been supported for use by IBM. Certainly the Internet as we know it would not exist without the standardization first introduced by Mosaic, and it is only the Internet which has given rise to the Bazaar approach. Raymond points out that the progression from Mosaic to Netscape, and now to free Netscape source code, was a transition from the Bazaar approach to the Cathedral and now back to the Bazaar.

Clearly some of the most powerful and useful software now available to us is suddenly originating from this new, open software development model. It is being built, rebuilt, and improved by many smart people with the simple motivation of sharing in the excellent result or in the self interest of having the result. The paper reviews many guidelines as to how and why many programmers in the world work together for free to produce a very robust LINUX, Apache, many UNIX applications, and so on. His arguments are very persuasive with regard to how future software, free or paid for, must be developed. As a corollary to his paper, it is also clear that far bigger commercial companies than MicroImages, and far smarter heads than mine, are trying to figure out how commercial software can coexist in this new Bazaar. And coexist it must, as without a large, viable commercial software industry, who will employ those software engineers who so readily contribute their efforts in the Bazaar?

Raymond's paper details how software can be developed using LINUX, Apache, Mosaic, and other similar open software development as a model. It clearly identifies the rules which govern software development by such a model. I think the Bazaar system is also a model for how commercial software products can evolve based on inputs from clients to a smart and responsive group of professional software engineers. While reading it, I even found out why MicroImages has haphazardly evolved into a "Bazaar-like" method of product development over the past 12 years.

For example, the Bazaar approach has as one of its foremost rules to release reasonably tested versions frequently and then respond to errors as effectively as possible. It points out that users of the product will trade off some rapidly fixed errors in exchange for their input into the evolution of the product. It continues on with many more insights into how software will have to be developed in the future with a world-wide, intelligent software user community; very rapid information exchange via the Internet; and more complex software that requires increasingly more cooperation among the software engineers, management, and the end users.

MI/X (MicroImages' X Server)

Windows 98.

The minor modifications needed to use the Windows version of the MI/X server with Windows 98 have been incorporated. The U.S. Justice Department notwithstanding, Windows 98 shipped on June 25th. To set up MI/X for multiple monitor support under W98 or NT5.0, go to the MI/X section under "Support/Set-up/Preferences …" in TNTmips or to the "Options" icon in TNTview or TNTedit. The option will appear only if the system has multiple monitors attached. MicroImages stresses that the productivity of your geospatial analysis will be significantly increased if you use two or more monitors under W98, or set up a large scrolling area on a single monitor by using a display board with at least 4 MB of video RAM (VRAM). A color plate entitled Increase Productivity with Windows 98 is attached to illustrate and explain this idea in more detail.

Free MI/X.

Downloads of the MI/X servers by non-clients from microimages.com now average approximately 1500 per week (100 for 68xxx Macs, 200 for PMacs, and 1200 for Windows products). This quarter, MicroImages gave permission for four book and magazine publishers to include a version of MI/X on CDs accompanying their magazines and books (one in Japan, one in Slovenia, and two in Germany). It appears from user response that MI/X is more robust than the currently available X servers for Windows and MacOS, including Apple's MacX. This is likely because the TNT products have stressed, and thus forced the perfection of, MI/X more than most other smaller X activities on PC platforms, except LINUX, which has its own X servers. One of the most popular free uses of MI/X is to allow many PCs and Macs to communicate and work with LINUX/PC based servers.

There were 31 additional mirror sites added around the world this quarter to serve up MI/X, bringing the total of mirror sites to 91.

Macintosh

There were no special adjustments made to accommodate the use of the TNT products on this platform.

Licenses

A number of clients are switching from single user licenses to floating and multiple user licenses. Please remember that when such a change is made, only a single existing license can be traded in for credit as part of such an upgrade.

TNTlite™ 5.9

General.

* All the hyperspectral processes in TNTmips are free via TNTlite. This is described below in the section on this new process. The limit on the number of spectral bands which can be processed at one time in TNTlite has been removed to accommodate hyperspectral image analysis within TNTlite.

The direct full downloads of TNTlite from microimages.com from January through June 1998 were 2.5 times more than the number in the same period for 1997. Downloads for LINUX are increasing and now regularly equal or just exceed those for the PMac.

Shipments of TNTlite kits continue at approximately the same rate. The largest new client order of TNTlite 5.8 kits was for a forestry department at a university in Europe. The order was placed without any previous direct contact with MicroImages or a dealer except possibly via microimages.com.

LINUX.

Many of you are beginning to notice more and more publicity about the LINUX operating system. A number of academic groups have indicated to us that they have set up TNTlite on PC servers supporting multiple users in both floating and multi-user modes. One geography department has four such LINUX networks with TNTlite installed. This works very effectively as we supply the free MI/X for the other Windows and Mac computers which act like terminals when a multi-user server is employed.

Getting Started Booklets.

A section below discusses the 38 Getting Started tutorial booklets which are now available. It suffices to say here that all these booklets with associated geodata sets can be downloaded free for use with TNTlite as Acrobat PDF files or PageMaker 6.5 files. Those purchasing the physical kit version of TNTlite 5.9 will find it includes printed versions of all the 38 current booklets, modest bookshelf storage boxes for the booklets, and that all the PDF files and the sample geodata sets they use are on the V5.90 CD.

Modifications since V5.90 CDs.

* The limit on TNTlite raster objects has been raised from the product of 640 by 480 (307,200 cells) to the product of 614 by 512 (314,368 cells) to accommodate full AVIRIS images.

TNTatlas® 5.9

This process can connect to GPS devices to show cursors and scroll the view when the GPS position nears the edge of the view.

The following 47-page reference on how to create TNTatlases has been prepared by one of MicroImages' dealers: Mastering TNTlink and the Presentation of GeoSpatial Datasets with TNTatlas, "A Rambling Yet Roughly Instructional Guideline For Getting Creative and Productive with TNTlink." It was prepared for your use free of charge by Dr. Thomas H. Furst, Furst Light GeoTechnologies, 2295 Dexter Drive, Suite 200, Longmont, Colorado 80501-1515; voice (303)682-3046; FAX (303)682-3157; email tfurst@lanminds.net. It can be viewed or downloaded from www.microimages.com/documentation/tntatlas/tfurst.

Installed Sizes.

Loading TNTatlas 5.9 processes onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.

 

PC using W31                              16 MB
PC using W95                              19 MB
PC using NT (Intel)                       19 MB
PC using LINUX (Intel)                    17 MB
DEC using NT (Alpha)                      19 MB
PMac using MacOS 7.6 and 8.x (PPC)        33 MB
Hewlett Packard workstation using HPUX    20 MB
SGI workstation via IRIX                  22 MB
Sun workstation via Solaris 1.x           19 MB
Sun workstation via Solaris 2.x           20 MB
IBM workstation via AIX 4.x (PPC)         21 MB
DEC workstation via UNIX (OSF/1) (Alpha)  22 MB

 

TNTview® 5.9

Changes.

No specific changes were made for TNTview alone. However, many other changes were made in processes provided as part of TNTview. These changes are explained in detailed descriptions in the TNTmips New Features section and in the attached color plates. The improvements include:

• the new 3D simulation process

• expanded GPS support

• SML additions

• all improvements in the visualization process

• viewing of the new topological structures in vector objects

When TNTview is installed, a second icon representing an APPLIDAT will also appear. An explanation of this new kind of product can be found in a detailed section under TNTmips New Features. You can create and use APPLIDATs and other TurnKey Products via TNTview.

Upgrades.

Within the NAFTA point-of-use area (Canada, U.S., and Mexico) and with shipping by UPS ground. (+50/each means $50 for each additional quarterly increment.)

 

Price to upgrade from TNTview:

TNTview Product     V5.80   V5.70   V5.60   V5.50   V5.40   V5.30 and earlier
W31, W95, and NT     $95     170     225     275     325     +50/each
Mac and PMac         $95     170     225     275     325     +50/each
LINUX                $95     170     225     275     325     +50/each
DEC/Alpha via NT     $125    225     300     350     400     +50/each
UNIX single user     $155    280     375     425     475     +50/each

 

For a point-of-use in all other nations with shipping by air express. (+50/each means $50 for each additional quarterly increment.)

 

 

Price to upgrade from TNTview:

TNTview Product     V5.80   V5.70   V5.60   V5.50   V5.40   V5.30 and earlier
W31, W95, and NT     $115    205     270     320     370     +50/each
Mac and PMac         $115    205     270     320     370     +50/each
LINUX                $115    205     270     320     370     +50/each
DEC/Alpha via NT     $150    270     360     410     460     +50/each
UNIX single user     $185    335     450     500     550     +50/each

Installed Sizes.

Loading TNTview 5.9 processes onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.

PC using W31                              23 MB
PC using W95                              27 MB
PC using NT (Intel)                       27 MB
PC using LINUX (Intel)                    22 MB
DEC using NT (Alpha)                      28 MB
PMac using MacOS 7.6 and 8.x (PPC)        39 MB
Hewlett Packard workstation using HPUX    27 MB
SGI workstation via IRIX                  31 MB
Sun workstation via Solaris 1.x           25 MB
Sun workstation via Solaris 2.x           26 MB
IBM workstation via AIX 4.x (PPC)         30 MB
DEC workstation via UNIX (OSF/1) (Alpha)  32 MB

 

TNTedit™ 5.9

All the features added to the TNTmips processes supplied as part of TNTedit have been correspondingly updated. All the new features in the following major sections apply; please review them below:

• System Level Changes

• Display Spatial Data

• 3D Simulation

• GPS Input

• Import/Export

• Vector Filtering

• Object Editor

• Geospatial APPLIDATs

• SML

• Internationalization

 

The most significant single addition to TNTedit is the ability to directly access, edit, and save ESRI's E00, Coverage, and Shapefiles. See the section on the object editor below for details.

Upgrading.

If you did not order V5.90 of your TNTedit and wish to do so now, please contact MicroImages by FAX, phone, or email to arrange to purchase this upgrade or annual maintenance. Entering an authorization code when running the installation process allows you to complete the installation and immediately start to use TNTedit 5.90 and the other TNT professional products it provides to you.

If you do not have annual maintenance for TNTedit, you can upgrade to V5.90 via the elective upgrade plan at the cost in the tables below. Please remember, new features have been added to TNTmips each quarter. Thus, the older your current version of TNTedit relative to V5.90, the higher your upgrade cost. As usual, there is no additional charge for the upgrade of your special peripheral support features, TNTlink, or TNTsdk, which you may have added to your basic TNTedit system.

Within the NAFTA point-of-use area (Canada, U.S., and Mexico) and with shipping by UPS ground.

 

TNTedit Product Code    Price to upgrade from TNTedit V5.80
D30 to D60              $175
D80                     $225
M50                     $175
L50                     $175
U100                    $300

 

 

For a point-of-use in all other nations with shipping by air express.

 

TNTedit Product Code    Price to upgrade from TNTedit V5.80
D30 to D60              $225
D80                     $275
M50                     $225
L50                     $225
U100                    $350

 

 

Installed Sizes.

Loading the TNTedit 5.9 processes onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.

 

PC using W31                              41 MB
PC using W95                              50 MB
PC using NT (Intel)                       50 MB
PC using LINUX (Intel)                    34 MB
DEC using NT (Alpha)                      51 MB
Power Mac using MacOS 7.6 and 8.x (PPC)   55 MB
Hewlett Packard workstation using HPUX    44 MB
SGI workstation via IRIX                  52 MB
Sun workstation via Solaris 1.x           40 MB
Sun workstation via Solaris 2.x           40 MB
IBM workstation via AIX 4.x (PPC)         50 MB
DEC workstation via UNIX (OSF/1) (Alpha)  54 MB

Getting Started Booklets

Introduction.

The collection of Getting Started tutorial booklets continues to expand. Five new booklets are being shipped with V5.90. Currently the series contains 38 booklets, all of which have been provided to you. The available booklets now contain over 800 color pages, the equivalent of three good-sized textbooks on geospatial analysis. As usual, the sample geodata sets used in each booklet have also been included on the CD and on microimages.com. Almost all of this geodata is sized so that it can be used in the TNTlite product.

 

IMPORTANT: It is becoming clear that some of our experienced clients are not using the Getting Started booklets!

 

Before these booklets were available, MicroImages planned on six months to a year for a new software support specialist to "come up to speed"; now a new specialist can gain the same breadth of knowledge about TNTmips in one month devoted to completing all these tutorials. If you are the boss, it is particularly important to set a new employee in front of TNTmips or TNTlite and have them spend the first month going through each tutorial. Even if they are experienced with some other GIS or IPS software, this month will pay handsome dividends in the speed, and more importantly the breadth, of what they will accomplish for you. If you are in a hurry, you might consider paying them a bonus for each booklet they complete at home.

Previously Completed Booklets. [33 units already in your possession]

Announcing TNTlite

Surface Modeling

Displaying Geospatial Data

Georeferencing

Feature Mapping

Theme Mapping

Editing Vector Geodata

Image Classification

Editing Raster Geodata

Navigating

Making Map Layouts

Mosaicking Raster Geodata

Importing Geodata

Building and Using Queries

3D Perspective Visualization

Interactive Region Analysis

Pin Mapping

Acquiring Geodata

Managing Databases

Making DEMs and Orthoimages

Style Manual

Vector Analysis Operations

Spatial Manipulation Language

Using Geospatial Formulas

Exporting Geodata

Creating and Using Styles

Editing CAD Geodata

Filtering Images

Editing TIN Geodata

Getting Good Color

Combining Rasters

Sketching and Measuring

Digitizing Soil Maps

 

 

New V5.90 Booklets. [5 new units shipping]

Managing Geoattributes

Rectifying Images

Introduction to Map Projections

Constructing a HyperIndex

Changing Languages (Localization)

 

 

Reissued after V5.90. [2 units, download now from microimages.com]

Interactive Region Analysis

Theme Mapping

 

Scheduled for V6.00. [4 units]

Using Hyperspectral Analysis

Operating the 3D Simulator

Network Analysis

Installing the TNT products

 

 

Possible Future Booklets. [18 units]

Sharing Geodata with other Software

TNT Technical Characteristics

Scanning

Vectorizing Scans

Using the Software Development Kit

Surface Analysis Operations

Using the Electronic Manual

Introduction to Hazard Modeling

Modeling Watersheds and Viewsheds

Extracting Geodata

COGO

Introduction to Remote Sensing

Introduction to GIS

Introduction to RADAR Interpretation

Introduction to Hyperspectral Analysis

Introduction to Digital PhotoInterpretation

Introduction to Creating Management Zones for Precision Farming

Introduction to PseudoDOQs from 35 mm Slides

 

Keeping Up.

Constant Changes. Some of the booklets released prior to V5.90 have been upgraded to reflect changes in the associated processes, for example, changes associated with the layer control panel introduced in V5.80. The Building and Using Queries, Sketching and Measuring, and Displaying Geospatial Data booklets have been updated and are on your CD. The V5.90 TNT product CD contains all the latest booklets which were available at the time of the CD duplication, and you can view them on-line or print them out in color.

Since the duplication of the V5.90 CDs, the Interactive Region Analysis and Theme Mapping booklets have been revised to be current with V5.90 and can be downloaded from microimages.com. The following booklets are significantly out-of-date relative to V5.90 and require major revision: Laying Out Maps, Using SML, and Image Classification.

Remember, any modified, improved, or new booklets are posted on microimages.com immediately for your access in PDF and PageMaker formats. All associated geodata, along with any modifications or corrections of it for booklet updates, is also posted at the same time. Thus, while you may have all the booklets, they may be updated or expanded at any time. It is a good practice to check the status of the booklets on microimages.com every couple of weeks. Each booklet carries a date on page two which the author changes each time it is modified.

Draft Releases. Via microimages.com, MicroImages is going to get "draft" or "beta" tutorials posted for you as rapidly as possible. This is the same practice that is used with new software features released each Tuesday and Thursday. Many of you are moving so fast in advancing your geospatial analysis activities, applications, and needs, that even MicroImages is pressed to keep up with you in writing software and instructions on how to use it. Rapid updating of tutorial materials which are nearly complete (in beta or draft form) gets you most of what you need as rapidly as possible, which is certainly better than having nothing at all for many weeks or months more while it is "made perfect".

Status Table. By the time you read this, you will be able to print out a simple table from microimages.com showing the status of each booklet and its geodata currently posted for your downloading. This table will contain the booklet's name, the date of last revision (as on the second page), the TNT version it is concurrent with, the name of associated geodata set(s), the date the geodata was last changed, and so on. Print this table each time you visit microimages.com and compare it to the last table you printed and saved to see if anything has changed which you need to download. Since it is very important for you to track the changes in these tutorial and reference materials, the "What's New" access on the front page at microimages.com will indicate the most recent date that any modification was made to any of the Getting Started materials and also provide direct access to this status table.

Error Reporting. Certainly the Getting Started booklets and their associated geodata sets can have errors which you should report immediately via software support (as some of you are already doing). When you do so, you will find that you get the same kind of error code returned (for example, ldm2037) as with software errors. These errors are managed by MicroImages as software errors, and their priority and status can be checked by entering this ID code at www.microimages.com/support/features. Corrections of these errors will often cause a new version of the booklet, and especially the geodata, to be posted.

Translations. Earth Intelligence Technologies Co. (EIT), the MicroImages dealer in Thailand, has carried the use of the Getting Started booklets into their plans for the future. This dealer's staff has shared the work of selecting a dozen of the booklets they consider most important in PageMaker format, abstracted important pages, translated them into Thai, and laid them out again in PageMaker in 8.5" by 11" format. This resulted in a 177 page printed and bound reference book with a nice cover which they are distributing to Thai universities without charge along with TNTlite. Four sample pages chosen at random in their "Getting Started" book are enclosed so you can see the quality of what they have done. MicroImages has spent time creating TNTlite for student use, and we certainly appreciate the time this company has spent, in a poor economic environment, in helping their nation's students.

Abstractions. Some of you may also wish to extract material from the booklets for use in your own printed manuals, guides, translations, and other reference materials. The Adobe Acrobat Reader is excellent for viewing and printing the color booklets. However, it is not possible to extract illustrations from the PDF files or translate their text to other languages. MicroImages now creates these booklets in a standard fashion in Adobe PageMaker 6.5 from which the PDF files are created for inclusion on the CD. In response to your requests, the PageMaker files as well as the PDF files for the latest version of each booklet can now be downloaded from microimages.com.

Hardcopy Upgrades. All Getting Started booklets are included in black and white printed format along with the CD in each TNTlite kit shipped. The current price of an individual kit is $40, and at this time it will include 38 or more booklets. Additional printed Getting Started booklets are added into the kit as they are completed during the quarter. If you need printed copies of any or all the printed booklets, please order a new TNTlite kit.

All new TNTmips professional product shipments contain all the published printed booklets and the associated geodata. Existing MicroImages clients with active maintenance contracts get all new booklets published that quarter in their upgrade shipment. You are also free to duplicate the published booklet, duplicate it via the PDF file, or cannibalize its contents via the PageMaker file as long as the source of the information continues to be credited to MicroImages.

Future Plans. A goal for V6.00 will be the upgrade of all existing booklets to be current with V6.00 except for any completely new processes or features introduced in V6.00 at the last minute. To provide time to achieve this "version concurrency" in all published booklets, only a minimum of four new booklets have been scheduled for release with V6.00. Furthermore, from now on, priority will be given to upgrading and maintaining existing booklets to be as current as possible over creating new booklets.

TNT Reference Manual

Status.

The Reference Manual this quarter has 2621 single-spaced pages (a decrease of 266 pages). Most of this reduction results from the removal of the Appendix containing the documentation for the SML functions, as well as other streamlining. All this documentation is now integrated into the SML process. This means that if you download a new version of the process (as many do), you will get this built-in documentation rather than relying on a copy of the Reference Manual, which is issued only quarterly and therefore several months out of date. This is particularly important as SML is evolving so rapidly.

The Reference Manual installs into 32 MB with the illustrations or into 7 MB without them. Last minute supplemental sections which do not occur in the on-line HTML version or Microsoft Word version were created for new processes and features. These sections were completed for V5.90 after the master CDs were created for the reproduction process. These 34 additional pages are included in supplemental, printed form as follows.

3D Simulation (9 pages)

Orthorectification Mode [SPOT] (6 pages)

Vector Filters (19 pages)

Context Search Engine.

V5.90 now provides a search engine (a Java applet) for your Netscape or Explorer browser. It is used automatically when you access the Reference Manual, select Search, and enter a key word you wish to locate. Use it to search any page of the text except the "Volume Index" and "Table of Contents", which do not have links. The built-in document search tool in the browser can be used to search the "Table of Contents".

At the top and bottom of every page you will find a set of links. For example:

Next Page Bottom of Page Table of Contents Index Search

By clicking on "Search" you will instruct the browser to load the search engine. There will be a small delay while the search database is loaded. Type the words relating to the topic you are interested in at the "Enter query:" prompt. The results list provides links to all the pages that contain the selected words. The document search tool built into your browser can then be used to find specific occurrences of the words. This Java applet requires Microsoft Internet Explorer 4.x or Netscape Navigator 4.x to work. It also requires that Java be enabled in the browser or it will not run.

New TNT Features

* Paragraphs or main sections preceded by this symbol "*" introduce significant new processes or features in existing processes released for the first time in TNTmips 5.9.

* System Level Changes.

System.

The file/object selection dialog now allows double-clicking on an object when selecting multiple objects in a "set" (for example, selecting Red, Green, Blue rasters). This eliminates the step of first highlighting the object and then pressing the [==>] button.

Error messages can be saved to a text file. Please save and send this message file when communicating with MicroImages concerning these errors.

Away from your desk? Multitasking by doing other things? There is an option you can set in the TNT preferences to have TNTmips "beep" when a process is complete (use Setup/Preferences).

The unit list has been expanded to include over 60 new scientific unit types, such as "density", "concentration", "mass", and so on. Unit names can now be localized into your language.

 

IMPORTANT: You can use the "Print Screen" key on a PC or the shift-command-3 keys on a Mac to save your entire display area to an image file from within the TNT products. You can then select and insert these image files into products such as WordPerfect 8 or Word 97 on the PC, or Word 98 on the Mac. These products also support direct cropping, annotation, and so on, so these TNT images can be polished for attractive use in your reports and other documents.

 

New Menus. The TNTmips menus have been reorganized to provide a more logical arrangement for the many new processes which have been added over the last five to six years. A three page table is enclosed entitled Converting from Version 5.8 to 5.9 TNTmips Menu to assist you in finding the processes and features in the new structure. Perhaps you might like to temporarily post these pages above your system.

Project File - Vector Objects.

New topology types are now defined for vector objects. These additional vector topologies will be transparent to you until you choose to use them. The topology types were created to handle different mapping and problem solving requests from you and planned new features in the TNT products. For example, your requests for expanded network applications required the creation of the new network vector topology option. The different topology types are introduced below. A color plate is attached entitled New Vector Topology Types to help describe these new topology types. A second color plate is attached entitled Behavior of Topology Types.

1) Polygonal Topology. This topology is the type for all vector objects before V5.90 of the TNT products. It requires that none of the line elements within it intersect, and thus all lines meet at node elements. All polygons are assembled from this node-line topology and are maintained by all processes. No two polygons can overlap, and any polygon completely inside another is an island of that polygon. A common example of this topology would be a property ownership map wherein all land and water is accounted for in polygons without dispute. "A place for everything and everything in its place." Vegetation and soils maps are additional examples of this type.

2) Planar Topology. This topology also requires that none of the line elements in the object intersect and that line elements which do meet do so at node elements. The difference from the polygonal topology type defined above is that polygons are neither generated nor maintained by any process using it. The advantage is that more, and simpler, editing capabilities can be provided when there are no polygons requiring maintenance, splitting, and so on. Editing this type of topology is also faster, saving is faster, and the object is smaller, since polygons are not being created. Hypsography (contour) and hydrology maps are examples of uses for this topology type. It is also a convenient topology for materials which require Z values attached to the vertices in the lines.

3) Network Topology. This topology type allows line elements to cross each other, but the ends of the lines must have node elements. No polygons are generated or maintained by a process which is operating on this topology. This type is useful for routing and other network analysis applications. Road or utility maps are examples of uses of this topology type.

4) No-Topology. This topology is provided only for backward compatibility with older 3D vector objects. CAD objects are specifically designed for situations where no topology is needed and intersections are ignored; CAD objects should be used for this kind of "spaghetti geodata".

Combining Topologies. The selected vector topology is now maintained for 3D vector objects as well. This allows all the vector processes to operate on any vector object, regardless of its coordinate or topology type. For vector operations that combine vector objects (for example, vector combinations and vector merge), the topology and coordinate types are automatically promoted to match the highest level among the inputs. For example, if you merge two vector objects where the first is a 3D network object and the second is a 2D planar object, the resulting output object will be a 3D planar object. This occurs because a planar object is a higher topology type than a network object; the coordinate type is 3D because a 3D object is higher on the coordinate scale. If this upward reconciliation of topology were not done automatically, some information contained in the topology of one of the input vectors would be lost.
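The promotion rule just described can be sketched in a few lines. This is a minimal illustration only, not MicroImages' implementation; the numeric rankings are assumptions inferred from the merge example above (polygonal above planar, planar above network, and 3D above 2D).

```python
# Hypothetical sketch of the topology/coordinate "promotion" rule for
# operations that combine vector objects. Rankings are assumptions
# inferred from the example in the text, not actual TNT internals.

TOPOLOGY_RANK = {"none": 0, "network": 1, "planar": 2, "polygonal": 3}
COORD_RANK = {"2D": 0, "3D": 1}

def promote(objects):
    """Return the (topology, coords) a combined output object would use.

    `objects` is a list of (topology, coords) pairs, one per input object.
    Each attribute is promoted independently to the highest level present.
    """
    topology = max((t for t, _ in objects), key=TOPOLOGY_RANK.get)
    coords = max((c for _, c in objects), key=COORD_RANK.get)
    return topology, coords

# Merging a 3D network object with a 2D planar object yields a 3D planar
# object, matching the example in the text.
print(promote([("network", "3D"), ("planar", "2D")]))  # ('planar', '3D')
```

Note that the two attributes promote independently: the output takes its topology from one input and its coordinate type from the other.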

Element ID Tables.

You can now attach your attribute tables to the new Element ID tables. This feature is in response to some people using the Internal Element table and Element Number field as an Element ID field and expecting the vector processes to maintain the number through a vector operation. This was not a viable solution due to the algorithms used in the vector processes which can and usually will renumber elements. As an example, the validate operation performed by most vector processes will renumber the polygons. The new Element ID table attachments will be maintained by the vector processes just like any other database table.

Element ID tables are generated automatically as standard tables for vector point, line, and polygon elements. The point element ID table has a single field; the line and polygon element ID tables each have two fields, the "Original" element ID and the "Current" element ID. The current element ID will differ from the original element ID if the line or polygon is split in a vector process. The initial element ID values can be set in the object editor under the Layer/Properties… dialog and in the New Object Values dialog when creating a vector object.
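The Original/Current behavior can be illustrated with a toy example. This is a hypothetical sketch, not the TNT table format; the record layout and the `split_line` helper are invented here purely to show how both halves of a split element retain the original ID while receiving new current IDs.

```python
# Toy illustration of the line/polygon Element ID tables described above.
# Each record keeps the "Original" ID assigned at creation and a "Current"
# ID that changes when a vector process splits the element.

def split_line(table, current_id, next_id):
    """Split one line: both halves keep the Original ID, get new Current IDs."""
    rec = next(r for r in table if r["Current"] == current_id)
    table.remove(rec)
    table.append({"Original": rec["Original"], "Current": next_id})
    table.append({"Original": rec["Original"], "Current": next_id + 1})
    return table

table = [{"Original": 1, "Current": 1}]
split_line(table, current_id=1, next_id=2)
print(table)  # [{'Original': 1, 'Current': 2}, {'Original': 1, 'Current': 3}]
```

The point of the two-field design is that attribute attachments keyed to the Original ID survive renumbering by the vector processes.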

Node Attributes.

Node elements can now have attributes assigned to them. The node elements are attached to tables in the point database; therefore, there is no separate node database object. This allows nodes to be drawn using all of the selection and style information available for points. In fact, such node elements are now treated as point elements for the purposes of display.

If all of the lines attached to a node are removed through some operation, the node, if it has an attachment to a point record, will become a point. If a line is snapped to a point, that point will become a node with the database attachment being the Point Element ID field. Attaching database information to nodes will allow more specialized tables to be added to nodes for other line-node topology problems, for instance, routing and network analysis.

Display Spatial Data.

General.

* View-in-View. A "view-in-view" comparison tool has been added. It permits you to define, move, and resize a rectangular box in a view and show different layers inside versus outside the box. For example, data of different resolutions, dates, and so on may be compared. Complete control of which layers appear inside and outside is provided in the layer control panel. This is a very useful tool, so try it once.

DataTips. Many additional controls have been added for DataTips. It is now possible to set a "prefix" and "suffix" for each DataTip. For example, the prefix could be set to the field name and the suffix set to the units. Control over the number of decimal places and the units used is also provided.

Simultaneous display of DataTips for multiple layers is now possible. Each is shown as a separate line in a single DataTip. The simplest example is to show the RGB values of each cell in a color image displayed as separate RGB layers. Due to the potential confusion of showing multiple DataTips when many layers are viewed, you may want to turn this feature on and off as needed. It is also possible to turn all DataTips off without having to turn them off for each individual layer.

Scrolling. A view showing a GPS cursor will optionally auto-pan to keep its position within the view. A "halo" is drawn around the GPS cursor so that it is easier to see over complex backgrounds. GPS coordinates are displayed in the position report at the bottom of the screen.

Zooming. There are now options to zoom to the "extents" of the "active" as well as all "selected" elements for any layer.

Ten previous view settings are remembered, so you can back up 10 times after zooming or scrolling a view.

Groups and Layouts. The size and position of all open views is now saved with groups and layouts. When the group or layout is re-opened, the views will be restored to their previous locations. Previously-saved "groups" may now be added to the current layout. There is now a render-to-raster option for groups. There are options to remove all groups in a layout and remove all layers in a group. You can now specify to redraw only a single group in a layout to save time in altering a layout.

Miscellaneous. Options have been provided to highlight all "attached", "unattached", or "multiply-attached" elements in a given table. This is useful when assigning attributes and checking for consistent assignment.

You can now control the orientation of the coordinates shown in a map grid layer. There is also an option to show the "seconds" for latitude-longitude coordinates even if the value is zero (for example, N 32° 00' 00").

The thickness of the lines around scale-bars can be set.

The Open icon and menu selection provide an option which will open any saved layout, group, or 3D simulation.

When you save a snapshot of a view, its georeference is also saved for later use.

The measurement tools have an option to show extent information for each layer, for only the active layer, or to turn off the information entirely.

A View/Close option is available on the menu for each view. This replaces the inconvenient action of clicking the delete icon on the title-bar of the view.

The number of selected elements of each type is now shown in the layer "details" line.

 

Note: Now that you are familiar with and routinely using the new Layer Control panel, please submit your suggestions for its improvements in writing.

 

Raster Layers.

* Transparency. An overall transparency value may now be set for each raster layer. In addition, the transparency of each cell value may be set through the color palette editor.

An 8-bit mask raster object can now be selected to control transparency/opacity of all the cells in the raster layer shown in the view. A mask cell with a 0 value is 100% transparent, and a cell with a value of 255 is 100% opaque. The color plate attached entitled 3D Simulator for Animated Visualization illustrates an area of Crow Butte flooded with transparent blue water whose transparency is set as a function of the depth of the water. To accomplish this, a blue raster was overlaid and masked with an 8-bit version of the elevation raster. To create this flood mask, the elevation raster object was inverted (subtracted from 255). This new raster object was then modified so that the elevation values above the level of the flood were set to 0 = 100% transparent. When applied as a mask, the "unflooded" areas appear as normal. The areas "flooded" are now blue with varying opacity with depth.
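The flood-mask arithmetic described above is easy to sketch. The following Python fragment is illustrative only (the values are invented; in TNTmips this would be done with raster operations on a full elevation raster): it inverts an 8-bit elevation raster and zeroes every cell above the flood level.

```python
# Invented 8-bit elevation values; in practice this would be a full
# elevation raster scaled to the 0-255 range.
elevation = [
    [10, 80, 200],
    [40, 120, 255],
]
flood_level = 100  # cells at or below this elevation are "flooded"

# Invert so lower (deeper) cells get higher mask values -- the water
# overlay is more opaque where it is deeper -- and make everything
# above the flood level fully transparent (0).
mask = [
    [0 if z > flood_level else 255 - z for z in row]
    for row in elevation
]
# mask -> [[245, 175, 0], [215, 0, 0]]
```

The unflooded cells (value 0) let the underlying terrain show through unchanged, while the flooded cells grow more opaque with depth, exactly as in the Crow Butte color plate.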

Miscellaneous. Colormaps can now be used with 16-bit unsigned rasters having up to 65535 colors. The color palette may be edited from the raster layer display controls dialog.

Raster combinations of CMY and CMYK are available. There is also a mode setting for the RGBI combinations which determines how the intensity component is combined. Available modes are HIS, HBS, Brovey, and Average.
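Of these intensity modes, the Brovey method has a particularly simple common formulation: each band is scaled by the ratio of the intensity component to the combined brightness of the three bands. A minimal sketch of that common formulation (not MicroImages' actual implementation) is:

```python
def brovey(r, g, b, intensity):
    """One common formulation of the Brovey merge: each band is
    scaled by its share of the combined brightness."""
    total = r + g + b
    scale = intensity / total if total else 0.0
    return r * scale, g * scale, b * scale

print(brovey(90, 60, 30, 360))  # prints (180.0, 120.0, 60.0)
```

The band ratios are preserved (3:2:1 in this example) while the overall brightness is replaced by the intensity component.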

The region selection dialog now has the ability to "update" any open raster histogram views to show the histogram only within the currently selected region.

The "raw raster cell value" information may now be recorded as a text file.

Vector Layers.

* The ability to optimize label placement for point elements previously demonstrated via CartoScript in V5.80 is now integrated into vector layer display.

Dangling nodes may now be assigned a separate color from non-dangling nodes. This is useful in editing to visually locate possible errors for correction.

A style script (formerly called a "Style by Query") can be used to set transparency for polygon filling.

Color Palette Editor.

The dialog has been redesigned using tabbed pages to reduce technical clutter.

* Palettes having up to 65535 colors for 16-bit rasters can now be created and edited for use in connection with the new 16-bit colormaps.

Multiple palette entries can be selected for simultaneous editing. There is also a button which will select all "similar" colors to the currently-selected color.

Color control can be in RGB, HIS, HBS, CMY, and CMYK modes. The transparency percentage can be specified for each palette entry. A random color palette generation option has been added.

Single Color Selection/Editing Dialog.

The dialog has been redesigned using tabbed panels to reduce technical clutter.

Color control can be in CMY or CMYK modes in addition to all the other color modes supported. You can directly enter the color value as a percentage or as a number from 0-255 or select your color from a standard palette.

Modifications since V5.90 CDs.

* Selecting Objects. The Object Selection Dialog now presents a simpler appearance by using icon buttons instead of text buttons for specifying optional actions on the object(s) selected. The following actions are still controlled by text buttons: "OK", "Skip", "Cancel", and "Help".

An "Info" icon button has been added to display the same information about an object as in the Project File Maintenance dialog. This should prove to be very helpful in identifying your objects and checking their characteristics before using them.

A "Refresh" icon button has been added to force a re-read of the list of files. This is useful if you have changed the media, such as inserting a different CD-ROM.

Selecting All Objects. There is now a very useful "Add All" icon button available in the multi-object selection mode. This powerful "Add All" option has different behavior depending on whether a list of Project Files or objects is being shown. Within a single Project File, "Add All" will add ALL of the usable objects in that file to the list of selected objects. An example use of this would be to select all the 200+ spectral bands in a hyperspectral image Project File where each individual spectral band is, or appears to be, a separate object.

Outside of a Project File, the "Add All" option will add ALL of the usable objects in ALL of the files in the current directory to the list of selected objects. This could be a very large number of objects for use in mosaic or other processes. For example, put all the Project Files containing orthophotos of a county into a single directory. Then use Add All to select them all for immediate tiling into the display.

When you are in the multi-object selection mode, there is also a "Remove All" icon button, which clears the list of all selected objects. When you have selected hundreds of objects, it is sometimes easier simply to start over.

Future. The next scheduled changes for object selection will be to improve how you navigate from the drive and directory level to the object level in the Project File (maybe for V6.00).

* 3D Simulations (new prototype process).

Introduction.

Over the past years, there have been periodic requests for TNTmips to render moving 3D simulations. This kind of feature has been easier for our competitors to implement early, as they started with raster and DEM images only. Meanwhile, MicroImages was preoccupied with the implementation of a fully featured and complex integration of GIS, IPS, and so on into a geospatial analysis system. Now that TNTmips is a fully functional geospatial analysis system, a prototype of a geospatial 3D simulation process has been created.

Controls.

Layers. The simulation process has been structured to handle all the kinds of geodata layers (raster, vector, CAD, TIN, pins, ...) you are familiar with using freely in your visualizations within the TNT products. All the layer types supported in 3D groups may be combined into your simulations.

Path Creation. In this first release, you create a path in a separate 2D view of some or all of the layers you have selected. You draw the path in this view and use the Path dialog box to set a speed, a height above the surface (a fixed elevation or a constant height above the terrain), a pitch angle, and so on.

Dials and Knobs. A simulation may be previewed in either wireframe or solid view mode. At any time during the preview, you can control the speed, play forward or backward, and pause the simulation.

Realism. Typically, anyone who tries the simulator for the first time selects a large 2D image (for example, a DOQ covering 7 by 10 miles), sets the path speed to that of a car (for example, 60 miles per hour), leaves the frame rate at the default 24 per second, and then draws a zigzag path of 10 miles on the 2D image. The result is that in the simulation, you seem to creep across the landscape. After a few minutes, you remember how long it would actually take to drive that same 10-mile path at 60 miles per hour, which is what you told the simulator to do. And if you should happen to try to create a movie of this, expect it to be a huge MPEG file which will still be computing when you come back in the morning. One result of this is that the default speed was quickly changed from 60 miles per hour to 600 miles per hour.
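The arithmetic behind this surprise is straightforward: the number of frames to compute is just the path's travel time multiplied by the frame rate. A quick sketch (the function name is ours, not part of TNTmips):

```python
def frame_count(path_miles, speed_mph, fps=24):
    """Frames needed to render a fly-through of the given path."""
    duration_seconds = path_miles / speed_mph * 3600
    return round(duration_seconds * fps)

# The 10-mile path at car speed from the example above:
print(frame_count(10, 60))    # prints 14400
# The same path at the new default of 600 miles per hour:
print(frame_count(10, 600))   # prints 1440
```

At car speed, the 10-mile path takes 10 minutes of simulated time and therefore 14,400 rendered frames, which is why the faster default was adopted.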

Creating Movies. Simulations may be computed in slow-time and recorded as MPEG movies to be played back in real-time. The variables controlling how long it takes to compute your movie will be the complexity of your view, the size selected for the movie frame, and the frame rate selected. If you are computing an MPEG movie on a Pentium 133 of several minutes' duration with complex inputs, a frame rate of 24 per second, and a frame size of 1/4 of your screen, then plan to run it overnight. The two sample MPEG movies provided with V5.90 took about an hour each on a 266 MHz Pentium II computer. If you are looking for a justification for a new 400 MHz machine, this is it.

More. A color plate is attached entitled 3D Simulator for Animated Visualization which illustrates some of the advanced features provided by using geospatial layers as input. This color plate shows how the new 256 level transparency mask can be used in a simulation as could the many other layer control and visualization features in TNTmips.

What are MPEG Movies?

MPEG is a compressed file format (*.mpg) used throughout the industry as a standard in which to distribute and play back real or simulated movies. Two MPEG movie snippets prepared in V5.90 are included on your CD. Due to space limitations, only one is included on the "A" CD and is automatically installed as part of the TNTlite geodata as file bigsf.mpg. It provides a simulated flight north across the San Francisco Bay prepared from the sample SPOT and USGS DEM data distributed on the last Prototype 3 TNTatlas. In addition, a simple database was added containing a few records used for the pin mapping of the fictitious sail boats, radio towers, and marina flag poles which zip by as "hazards to navigation". The other MPEG movie snippet renders a DEM draped with a Landsat TM image of Mount Saint Helens, a contour map computed from the DEM, and some sample roads. Both movies can be found on your "B" CD as files mtstheln.mpg and bigsf.mpg.

Microsoft Internet Explorer 4.x and Netscape Navigator 4.x both contain "virtual tape decks" which will play back your *.mpg files. There are also many free MPEG players available on the Internet and elsewhere. Some give you a variety of flexible controls over the playback, such as control of the frame rate; most do not. When an MPEG movie is computed in TNTmips at 24 frames per second, it has all the information needed to reconstruct every frame. However, you may find that during playback you are periodically jerked forward to catch up with real-time. Some players handle playback better than others, so try several to find one that works well on your computer. If you are still getting jerky playback, especially with our sample movies, you should think about the need for a faster computer and display board. On MicroImages' standard, stock Pentium 133 computers and display boards (using Explorer's ActiveMovie) and an Apple Mac G3 (using QuickTime), the two sample movies play back without any jerking forward to catch up in time.

There is one source of jerking in V5.90 of the Simulator which is not related to your platform's slow file reading or decompression computations. If you draw a path with angles in it, your simulation will jerk you around them in a single frame. You can avoid this by drawing a curved path in XY. Also remember, if you seem to be jerked up and down, you may have chosen a constant height above the surface rather than a fixed height. If the surface has mountains to pass over, the peaks will put angles in the Z profile of the path, which will also jerk your view abruptly up and down. MicroImages will be adding features to smooth and control these abrupt changes in paths. For example, paths can be splined in the XY view to remove corners. The new profile editor described below will let you view, spline, and edit your path above the surface in the Z dimension. This profile view and editor can also be used to make sure your path does not crash into the terrain if you are flying, and penetrates a water surface only at the intended place if you are diving.

 

Keeping Time.

TNT geospatial simulations prepared on general personal computers can operate in three time frames: real-time, slow-time, and some-time. Which time you experience depends upon many factors related to the capacities of your platform (processor, drive, bus speed, ...), the complexity of your simulation (view size, layers, length, ...), and the efficiency in the TNT simulation process.

Real-time. Your real-time simulations are currently limited to wireframe approximations or slow, jump-forward frame rates to help you design a simulation to be computed later in slow-time. Use this real-time approach to check that objects are moving through the base object in a realistic fashion. Movies you have computed in slow-time provide your real-time simulations, but alas, they are fixed and not interactive.

Slow-time. This is the time it will take your particular personal computer to render an MPEG representation or complete frame from the original objects you select for layers. You can select a complex set of layers in their original extents, with map projections, database pin mapping with complex symbols, line features with styles, and so on. The 3D Simulation permits this kind of easy set-up just as in other TNT visualization processes, in contrast to competing simulation products whose inputs must already match in extent, projection, and so on. However, the price you pay is that computation cycles are wantonly absorbed, and the slow-time required to prepare your simulation will increase.

Some-time. The next task for MicroImages is to provide the flexibility in TNTmips, TNTview, and TNTatlas to handle various layers of geodata created as independent objects, yet compromise to create rendering in some-time (this some-time will become real-time when you have a computer fast enough to make it so). This will be accomplished by allowing you all the current flexibility to set up your 3D simulation and check at slow frame rate. Then a new procedure will generate two new matched 16-bit rasters containing the composite of all the materials to be draped and the 3D surface (for example, elevation) upon which it will be draped. Using these two precomputed composite rasters in a simulation avoids repeating conversion, resampling, table searches, and many other computations for every single frame in your movie. As a result, a simulation using these two layers will operate in some-time, which will approach real-time as our computers improve.

Generating the surface and drape rasters will take only minutes, as the layer combination computations will only be completed once, not 24 times for every second of final movie or for interactive viewing. It is then anticipated that on the latest 400 MHz Pentium based desktop platform, this some-time approximate simulation (in other words, using the two layers) will be fast enough to closely represent the appearance of the final movie. This will allow you to interactively control and design your path with a mouse or a joystick. Paths determined in this interactive fashion will be recorded and used later to generate full speed, high resolution movies in a slow-time batch mode from the two layer model or from all the original layers.

Future.

Many new features have occurred to MicroImages and will be apparent to you as soon as you try this process. The first priorities in improving the Simulator are to provide the ability to precompute the two composite quick-time viewing layers described above and to provide a means to smooth paths in XYZ.

GPS inputs.

GPS support in the TNT products is now even more important since SML and TNTsdk are both supported by TNTview. As a result, clients and MicroImages have logged a detailed list of new features needed in this area. The data logging SML script and the field sketching scripts described below in connection with SML depend upon these expanding GPS capabilities. A few minor modifications were included on the V5.90 CDs, but expanded functionality is available now.

V5.90 drastically reduces the time it takes for a view to respond and plot the current position of the GPS. Your current position shown by the GPS cursor now tracks your one second GPS fixes in real-time. It will also display the speed of your change in position.

* Modifications since V5.90 CDs.

Introduction. A GPS device is a "real-time" source of position coordinates when it is connected to the computer's serial port. Please note that when using inexpensive GPS equipment, the software external to that device, such as the TNT products, has very limited control of how and when the device reports positions.

As expected, each manufacturer of GPS equipment seems to have its own idea of the protocol and contents of the data stream sent out by its devices. Each protocol has its advantages from that manufacturer's viewpoint in the expected applications of its equipment. Furthermore, some manufacturers, such as Garmin, want to control all the applications of their special features and therefore license out, charge for, and/or otherwise control access to their protocol and its format. In the longer run, this will not work for general purpose units, but these manufacturers will need to learn this the hard way in the marketplace.

Fortunately, there is also a standard protocol for how a GPS device reports information, and it is supported by most general purpose GPS equipment regardless of cost. MicroImages supports this common NMEA 0183 (National Marine Electronics Association) standard and the "Trimble ASCII" protocol.
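For reference, here is a sketch of pulling a position fix out of one common NMEA 0183 sentence type, the GGA sentence. This is generic NMEA parsing, not the TNT products' internal code; the sample sentence is a widely circulated example.

```python
def parse_gga(sentence):
    """Extract a position fix from an NMEA 0183 GGA sentence."""
    fields = sentence.split(',')

    def to_degrees(value, hemisphere):
        # NMEA packs latitude as ddmm.mmmm and longitude as dddmm.mmmm.
        raw = float(value)
        degrees = int(raw // 100)
        minutes = raw - degrees * 100
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ('S', 'W') else decimal

    return {
        'lat': to_degrees(fields[2], fields[3]),
        'lon': to_degrees(fields[4], fields[5]),
        'satellites': int(fields[7]),
    }

fix = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
# fix['lat'] is about 48.1173 degrees north, fix['lon'] about 11.5167 east
```

Because every NMEA-compliant receiver emits sentences in this shape, software like the TNT products can support "most general purpose GPS equipment regardless of cost" with a single parser.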

GPS Log Files. TNT products now support a prerecorded source of coordinates called the GPS Log File. These files can be created while reading from a GPS device, as well as by editing or other manual methods. Log files can also be used as virtual GPS devices to simulate GPS input where a real GPS device is not available. You have extensive control over the playback of a GPS log file acting as a virtual GPS device including playback speed, interpolation of positions between position entries, rewinding and replay, and so on.

GPS log files store coordinate positions and associated information in a simple comma-separated-value text file. It is thus a simple matter for you to create virtual GPS log files using a text editor or your own programs. MicroImages will support specific manufacturer's log file formats if you can supply their documentation.
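As a sketch of how simple such a file can be, the following reads a hypothetical comma-separated log with Python's standard csv module. The field names here are invented; the actual TNT log layout may differ.

```python
import csv
import io

# Hypothetical field layout -- the actual TNT log format is not
# documented here, but any simple CSV like this could serve as a
# virtual GPS source.
log_text = """\
time,lat,lon,elevation
10:00:00,41.1234,-96.5678,355.2
10:00:01,41.1235,-96.5677,355.4
"""

positions = list(csv.DictReader(io.StringIO(log_text)))
track = [(float(p['lat']), float(p['lon'])) for p in positions]
# track -> [(41.1234, -96.5678), (41.1235, -96.5677)]
```

A file this simple can be produced by a text editor, a spreadsheet, or your own program, which is what makes the virtual GPS device concept so flexible.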

Multiple GPS Sources. An active GPS source is any directly read GPS device or GPS log file from which positional information is being requested. Any or all active GPS devices (real or logs) can be selected to display cursors in a view, accessed via an SML script, used in graphical editing, and so on. The TNT products now support active concurrent input from multiple GPS sources which can be a mixture of GPS devices and GPS log files. GPS devices cannot be active in two concurrent processes due to operating system limitations.

A simple use of several active sources would be to attach and access three GPS devices, all of which are concurrently displayed as cursors. One of the active sources can be designated to control the view, its scrolling, and related changes. An alternative might be to continually rescale the view window to keep all cursors in the view regardless of their geographic separation. A more complex application of multiple GPS sources would be one where several vehicles send in their positions by cellular phone, the vehicle containing TNTview is itself moving and producing a GPS input, and the driver attempts to follow a route across the field that was created in advance or logged by a previous vehicle (robots, anyone?).

GPS Menu. There is a new "GPS" menu on all views that support GPS position reporting. This menu currently provides options to:

• select which GPS source(s) to display positions for

• set up a new GPS device (in other words, activate it)

• open a GPS log file

• toggle auto-scrolling on/off

• select the units for reporting GPS locations, speed, ...

GPS Setup. A GPS device must be set up and configured before it can be used. A dialog allowing a new GPS device (real or logged) to be configured is available from anywhere a GPS source can be selected. There is no limit to the number of GPS devices which may be set up on a single system. As noted, an option on the GPS menu is also available to set up a device.

Automatic GPS Connection. Starting a TNT process which can use a GPS input, such as Display, will make a single attempt to connect to the default GPS devices, if any. If the connection is successful, the GPS location will be automatically displayed in all 2D group and layout views if the reported position is within the extents of the object(s) viewed. You will not need to press a "GPS" button to turn on GPS position reporting unless multiple devices and/or log playback are being used.

GPS Status and Control Dialog. A status and control dialog is available for each active GPS source. Status information displayed is position, speed, heading, accuracy, number of satellites, and so on. Not all this information may be available for specific equipment or log files, as they may not contain sufficient information to compute it.

Controls are available for selecting the symbol style, size, color, and so on for representing the current position of each active GPS source in all views.

If the source is a GPS log file, this dialog will provide the ability to change the playback speed, rewind the log, close the log, and so on.

As mentioned earlier, if the source is a real GPS unit, it acts as a dumb device, as little control of it is supplied by the NMEA protocol (in other words, TNT products cannot send control information to the device). Thus, this dialog can only provide limited control options such as the ability to close it as an active source.

 

CAD to Vector Conversion.

The CAD to vector conversion process requires that you specify which topology type to use in the vector object created. These vector topology types are polygonal, planar, and network, and are introduced and defined above.

Warping.

A default cell size is provided based on the cell size of the input raster.

Raster to Vector Boundary Conversion.

The output vector object will be created in "implied" map coordinates if the input raster is georeferenced, instead of in raster coordinates. The implied map coordinates are generally more convenient when using the object in subsequent processes.

Raster Correlation Histogram.

You can now set the highlight radius for the "dancing pixels" tool. The "form" of the regression line equation can be selected. Raster values and/or cell attributes can be viewed as DataTips. The histogram can be saved to a database table or text file. All raster types are supported. The axes are now labeled.

Georeferencing.

Scale and position locking options have been added. The option to "open" an RGB raster set has been reintroduced. This feature was omitted from V5.80.

Import/Export.

You can now designate raster values in the source file which will then be converted to null values in the raster object created.

CMYK TIFF.

CMYK TIFF files can now be imported and converted to RGB objects during import.

Clementine Lunar Probe Image Import.

The multispectral images collected by the Clementine lunar probe can now be imported into raster objects. These images are all small enough to be usable in TNTlite.

ERMapper Export Modifications.

The export process now exports the georeference information.

SDTS Raster Import.

The 16-bit DEM rasters posted on the network by USGS in Spatial Data Transfer Standard format can now be imported. There are other raster structures which are possible in the SDTS format which are not supported. If you acquire rasters in these forms, let us know and supply a sample SDTS file.

SDTS Vector Export.

A vector object can be exported to SDTS.

AVIRIS 94 Import.

Airborne Visible and Infrared Imaging Spectrometer imagery collected by NASA/AMES in 1994 (and possibly other years) can be imported. This AVIRIS hyperspectral image format can be pixels-interleaved-by-band, lines-interleaved-by-band, or band-sequential.

AVIRIS 97 Import.

Airborne Visible and Infrared Imaging Spectrometer imagery collected by NASA/AMES in 1997 & 98 (and possibly other years) can be imported. This AVIRIS hyperspectral image format is pixels-interleaved-by-band.

DEM Import Modifications.

TNTlite users can now import a limited portion of the USGS DEM into a raster object.

3D Vector Imports.

TNTlite users can import a limited portion of external 3D vector formats into vector objects. Importing only a part of one of these external formats was not previously possible, as the code for clipping did not support 3D materials. The addition of 3D vector object topology has made it possible to enable area selection during import.

CCRS Landsat.

The Landsat MSS and TM images distributed by CCRS (Canadian Centre for Remote Sensing) can be imported.

Modifications Since V5.90 CDs.

SDTS Attribute Export. The attributes in a vector object can be exported to SDTS.

ArcBIL/BIP Export. ESRI's ArcBIL/BIP raster format can be exported.

* Supervised Classification.

Training Set Editor.

The automatic image classification process now has a completely new training set editor. It can create new training sets or edit those produced in Feature Mapping or other processes. Its simplest use is to manually create or edit training sets by drawing over one or more reference images or maps. Two color plates are attached entitled Create Classification Training Data and More Training Set Editor Features. They illustrate the features and operation of this new process.

Raster Based. The training set editor creates an unsigned 4-, 8-, or 16-bit raster object overlaying the multispectral/multitemporal images to be classified. The value of each cell in this raster object defines whether it is a member of a training class and, if so, which class. Using this approach, you can create all sizes and kinds of training sets. The cells defining each class can be in polygons, circles, a regular grid or random array of single cells, and so on. This overlay raster object can also be created by any other suitable procedure in TNTmips. This editor can also create the training set raster from:

• other raster types

• selections of polygons from vector objects

• pin mapping points from tables using a field for a radius

• using point elements with a radius from a vector object

As is usual, the ability to select point and polygon elements by query, use computed fields, and so on is supported to make this a flexible procedure.

You may already have a vector object created elsewhere containing points or polygons to be selected as training sets. This new editor will allow you to access these objects, overlay them, and create a training set raster. The location and attributes of the vector elements can then be transferred to the raster. For example, the specific points selected by query from the vector object or by pinmapping a table can be inserted as circles into the training set raster. The diameter of these circles can be controlled directly within the query based upon some attribute of the point. The identification of these circular training sets can also come from the attributes.
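The circle-stamping idea can be sketched in a few lines. The following illustrative Python fragment (names and values invented) inserts attributed points into a training set raster as filled circles of their class value, with the radius taken from a per-point attribute:

```python
# A small blank training set raster; 0 means "no class assigned".
WIDTH, HEIGHT = 12, 8
training = [[0] * WIDTH for _ in range(HEIGHT)]

# Points as they might come from a pin-mapped table or vector object:
# (column, row, radius_in_cells, class_value). All values invented.
points = [
    (3, 3, 2, 1),
    (9, 5, 1, 2),
]

for cx, cy, radius, class_value in points:
    # Stamp a filled circle of the class value around each point.
    for row in range(HEIGHT):
        for col in range(WIDTH):
            if (col - cx) ** 2 + (row - cy) ** 2 <= radius ** 2:
                training[row][col] = class_value
```

In the editor, the radius and class value would come from the point's attributes or a query rather than a literal list, but the resulting raster has the same structure.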

Using an existing map in a vector object to provide training sets via queries is obvious. However, there have been several requests for the point training set approach. There was a list of environmental, ecological, limnological, and oceanographic applications for creating training sets in which no polygon could be drawn in the field, but the condition to be mapped could be identified as surrounding GPS-derived sample points. For example, a request was filed to map the depth to coral reefs using depth soundings from a boat. Several of you have also requested this point sample training so that common crop conditions encountered while scouting a particular crop in multiple fields could be used to build up a signature for that condition and crop on that date.

Training on Points. The new SML based GPS/image based datalogger described below is a good way to collect point training set samples "in-situ" with associated descriptive records. One of the parameters in the record logged could be the size of each training area identified when you stand within the middle of each of them in agricultural fields of a specific crop (for example, the diameter of the patches of weeds, bare soil, water logging, and salt logging). Other database fields in the record are logged to contain the GPS position and the identification of the patches and conditions. These tables can be immediately converted in TNTmips to point elements in a vector object. The training set editor is then used to create a training set raster as described above. Only agricultural fields of that crop on that date would be input to the classifier using a crop map mask object. The supervised classifier can then be used to map other areas of the same conditions in fields of that specific crop.

Masking Area Classified. The Training Set Editor will also allow you to create or load a binary raster object to mask the area of the multispectral/multitemporal images to be classified. You can use the editor to convert polygons in an existing vector object into a mask or to manually draw the areas to be masked over a reference image. In the agricultural crop application above, this mask could be used to limit the area classified to all fields of a specific crop type. Overall, supervised classification results will be much more meaningful if you first limit them with a mask to a specific problem rather than trying to map your desired classes from everything in the world. For example, use a mask to define all fields of a crop type whose variability and conditions are of interest (farmers already know what kind of crop they planted); to subdivide all forested lands into meaningful categories; to map sediment concentrations in water areas; or to measure depth to coral reefs only where the mask outlines the area of the reefs.
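Conceptually, a binary mask simply zeroes out every cell outside the area of interest so it plays no part in the result. An illustrative sketch, here applied to a small class raster with invented values:

```python
# A 1 in the mask keeps the cell; a 0 excludes it. Values invented.
mask = [[1, 1, 0],
        [0, 1, 1]]
classes = [[3, 1, 2],
           [2, 2, 1]]

masked = [
    [c if m else 0 for c, m in zip(class_row, mask_row)]
    for class_row, mask_row in zip(classes, mask)
]
# masked -> [[3, 1, 0], [0, 2, 1]]
```

Cells outside the mask fall to 0 ("Unclassified"), so only the fields, forests, or reefs you actually care about contribute classes.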

Training Set Table. The training set editor creates or opens an existing database table with records linked with each cell value in the training set raster. This table contains, for each cell value (in other words, training class), the cell value and associated class name and color. When you use a training set raster created by other means (pins, vector, and so on), you will need to use the features in the editor to create the entries in this table to define your classes.

Editing. The Training Set Editor dialog box provides all the controls for organizing, labeling, grouping, deleting, ... cells in the training set raster object into meaningful classes whether created in the editor or elsewhere. Overall, it provides information on the number of classes desired, the number which have been set up (in other words, trained), the number of classes selected, and the number of classes being combined. Using this editor you can:

• change color and descriptive name for each class

• add a new class or several classes

• select and delete a set of classes

• change the cell value used to identify the class type

• select and unselect classes via a tabular list, view window, scatterplot view, or dendrogram window

• draw a polygon and select, unselect, and revert selection for all enclosed classes

• assign all cells or unclassified cells inside a drawn polygon to one selected class except for masked cells

• release all cells or cells for selected classes inside a drawn polygon from their class assignment

• mask or unmask all cells inside a polygon

A tabular portion of this dialog shows and allows you to select and edit for each current class the current cell value, color, name, and descriptive name.

A Priori Probabilities.

You can now specify an a priori probability for the occurrence of each class in the maximum likelihood supervised classification method.

Unclassified Cells.

The Maximum Likelihood classification method allows you to set a minimum likelihood percentage threshold. Once the classifier has determined the maximum class assignment probability for a cell, that value is compared to the threshold. If the probability exceeds the threshold, the cell is assigned to the identified class. If not, it is set to "Unclassified" (0 value in the class raster). By varying the minimum likelihood percentage, you can control the quality of the class assignments and exclude cells with only a low probability of matching any of the defined classes.
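
The threshold logic, together with the a priori probabilities described above, can be sketched as follows (single-band Gaussian statistics with invented numbers; the actual classifier is multivariate):

```python
import numpy as np

# Invented per-class statistics for one band; the real classifier is
# multivariate, but the thresholding works the same way.
means  = np.array([10.0, 30.0, 55.0])   # per-class means
sigmas = np.array([4.0, 5.0, 6.0])      # per-class standard deviations
priors = np.array([0.5, 0.3, 0.2])      # a priori class probabilities

def classify(x, min_likelihood=0.05):
    # Gaussian likelihood of value x under each class, weighted by its prior.
    like = priors * np.exp(-0.5 * ((x - means) / sigmas) ** 2) \
           / (sigmas * np.sqrt(2.0 * np.pi))
    if like.sum() == 0.0:               # matches nothing at all
        return 0
    post = like / like.sum()            # normalized class probabilities
    best = int(post.argmax())
    # Below the threshold the cell becomes "Unclassified" (class 0).
    return best + 1 if post[best] >= min_likelihood else 0

print(classify(29.0))    # → 2 (near the second class mean)
print(classify(500.0))   # → 0 (far from every class)
```

Raising min_likelihood tightens the quality of the assignments; lowering it classifies more cells at the cost of weaker matches.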

Raster Properties.

Point properties: raster properties can now be computed for vector points using a category raster instead of the source vector. The resulting database table includes two fields: PointID and Raster Cell value.

Databases.

Oracle.

You can now remotely link to and import an Oracle table if you have permission to do so. When linking, you can choose whether the password is stored in the link so you never have to enter it again. If you do not store the password with the link, you will be prompted for it whenever you actually access the table. Note that for security reasons, your organization and Oracle setup may not permit you to store your password and other access information in an external software package. In fact, you may have to get permission to access your Oracle database from any external software. Remember, multi-level database security is in large part what an enterprise database like Oracle is all about. Microsoft SQL Server is trying to introduce more advanced levels of security, while Microsoft Access is designed for small-organization or personal use where such security is an impediment to efficient operation.

* HyperSpectral (new prototype process).

Introduction.

Why Now? Some of you were told as recently as six months ago that hyperspectral analysis would not be added to TNTmips until there were additional public sources of hyperspectral imagery. Then the launch of the experimental Lewis hyperspectral imaging satellite became imminent, and MicroImages used it as the basis to plan expansion of TNTmips into this area. Unfortunately, Lewis failed, and wider availability of public hyperspectral images is still in the future. Fortunately, MicroImages did not fail, and TNTmips 5.9 has its hyperspectral prototype process available. Now we can all learn how to use and perfect it using the limited sources of hyperspectral imagery while we await the possible successful launch of more satellite and aircraft programs.

At first this new process will be useful only to those who understand its complex application to the very limited supply of hyperspectral imagery. At present, these images are collected only by experimental aircraft devices at various test sites around the world. The most common and advanced source of public hyperspectral imagery is the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) developed at the Jet Propulsion Laboratory for NASA.

What is Hyperspectral Imagery? These new analysis procedures should be applied to images that have at least 30 spectral bands. At least one hyperspectral imaging device referenced on the web has 1720 individual spectral bands! Typically, hyperspectral imagery will have from 100 to 250 individual spectral bands. Sometimes a spectral dimension reduction or other selection procedure is applied to limit the analysis to 30 to 40 important spectral bands. Which bands to select from the larger original number is still a matter of experimentation in the context of the particular objectives of the application.

Most hyperspectral imagery has pixels whose values are related back in some proportion to the radiance in a narrow wavelength range leaving the surface for each pixel. As a result, these values are often referred to as effective spectral radiance. This effective spectral radiance curve for each pixel is affected by many, many factors such as the composition and intensity of the incoming solar irradiance, atmospheric absorption, bidirectional angles of irradiance and observation, pixel composition, bidirectional reflectance and transmission of surface components, the image's transformations, and so on. Many schemes are applied to remove these variables to produce an effective spectral reflectance curve for each pixel.

Hyperspectral images can be analyzed in TNTmips 5.9 using hyperspectral image processing techniques or the spectral curve processing tools. A color plate entitled Hyperspectral Analysis Process is attached to outline the procedures provided in this first release of this new process. Until a new hyperspectral cube object is provided (see below), your hyperspectral imagery must be imported into a standard RVC Project File with one spectral band in each raster object. All spectral bands must be added into the Project File in ascending wavelength order. The hyperspectral import processes supplied by MicroImages for specific hyperspectral image types will ensure these rules are followed. If you import or otherwise create your own proprietary hyperspectral images in a Project File, make sure they meet these requirements.
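
The ordering requirement amounts to a simple sort by band-center wavelength before the rasters are added (the band names and wavelengths below are hypothetical):

```python
# Hypothetical band list: (raster name, band-center wavelength in µm).
bands = [("band_red", 0.66), ("band_blue", 0.45), ("band_green", 0.55)]

# Sort by wavelength so the bands go into the Project File in the
# required ascending order.
bands.sort(key=lambda band: band[1])
ordered_names = [name for name, _ in bands]
print(ordered_names)   # → ['band_blue', 'band_green', 'band_red']
```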

Can I Use Curve Libraries? You can create your own spectral libraries based on spectra extracted directly from the hyperspectral images. Or, you can work with standard spectral libraries provided by others or collected in-situ by such devices as spectroradiometers. For geological applications, the United States Geological Survey (USGS), Jet Propulsion Laboratories, and possibly others have provided sample mineral spectral reflectance libraries which can be obtained via the Internet.

Spectral reflectance reference libraries such as those from the USGS record spectra as reflectance, which is a physical property of the mineral samples measured. The USGS spectral libraries represent prepared samples (for example, crushing or polishing) and are measured in a spectrophotometer in a laboratory setting. The spectral reflectance of these same materials in-situ can be altered by several factors such as weathering, algae growths, moisture, … Pixel values collected by a hyperspectral imaging device represent effective spectral radiance. They must be calibrated into effective spectral reflectance in order to be comparable to the USGS and other spectral reflectance libraries. TNTmips offers several such important calibration techniques.

Reference libraries of spectral reflectance curves of vegetative surface materials are even more difficult to obtain. In this situation, the spectral reflectance of the plant materials is a bio-physical property which changes constantly with plant growth, chemical alteration, morphology, phototropism, vigor, … Field spectroradiometers are available to measure in-situ vegetation spectral radiance and spectral reflectance quickly and easily in the field. It is several orders of magnitude more difficult to collect all the plant parameters needed to relate this information to the amount, condition, and other surface factors needed if these spectral curves were to be "libraried" and used in the same fashion as mineral spectral libraries. However, many useful vegetation oriented applications of hyperspectral imagery can still be made using sophisticated image processing methodologies.

Where Can I Get Images? Several spacecraft will be launched in the next two years to collect hyperspectral imagery. The first will be NASA's MODIS system with 36 spectral bands, which should be launched by the time you read this. MODIS is a major component of the NASA EOS program. It has coarse resolution, but will collect frequent, large-area images which will be readily available to the public. In May 1999, NASA will also launch, in tandem orbit with Landsat 7, a little-publicized EO-1 (Earth Observation 1) satellite with two different hyperspectral imaging systems which seem to be called ALI. ALI will have 30 meter resolution and a small image coverage area.

Both the U.S. Air Force (via Warfighter on the semi-commercial OrbView-4 satellite) and the U.S. Navy (via the NEMO, a semi-commercial satellite of the Navy and Space Technology Development Corporation) will begin to collect and distribute hyperspectral images within two years. Excess capacity of both these hyperspectral satellites will be sold to the public. Finally, a consortium of mining companies and Australian government entities is proposing to launch a private hyperspectral imager called ARIES into an orbit for optimal Australian coverage. It thus appears that a stream of space based experimental hyperspectral images will become available over the next two years.

Expanded aircraft systems are also appearing for the collection of hyperspectral images, including NASA's MODIS and ASTER aircraft simulators. The most prominent commercially available hyperspectral scanner is HYMAP, which is manufactured in Australia (rumor says three have been sold). There are also several private firms who have purchased or built proprietary, one-of-a-kind hyperspectral imaging devices. These firms collect and will often process hyperspectral imagery. These will not be discussed in this MEMO, as these firms are not cooperative in providing information when they learn that free hyperspectral image analysis capabilities are to appear. There are also several other hyperspectral imaging devices of limited wavelength range and capability (AISA from SPECIM created by VTT in Finland, CASI from Canada, …). These also will not be discussed further in this MEMO.

Aircraft Hyperspectral Imagers.

AVIRIS. (Airborne Visible/Infrared Imaging Spectrometer). This is the second in a series of imaging spectrometer instruments for earth remote sensing. For more information, start at home page http://makalu.jpl.nasa.gov/aviris.html.

IFOV: 1 milliradian

Ground Resolution: 66 feet (20 meters) at 65,000 feet

Total Scan Angle: 30 degrees

Swath Width: 5.7 nmi (10.6 km) at 65,000 feet

Digitization: 12-bits

614 pixel swath by 512 pixels

224 contiguous spectral bands from .41 to 2.45 µm

HYMAP. A commercial airborne hyperspectral scanner manufactured by Integrated Spectronics, 22 Delhi Road, North Ryde, NSW 2113, Australia.

IFOV: 2.5 milliradians

FOV: 60 degrees

512 pixels swath

128 contiguous spectral bands from .44 to 2.94 µm

MODIS Airborne Simulator. (Moderate-Resolution Imaging Spectrometer). The MODIS Airborne Simulator, along with MODIS itself, is designed to measure terrestrial and atmospheric processes. For more information, start at home page www.gsfc.nasa.gov/MODIS/MAS/Home.html.

IFOV: 2.5 milliradians

Ground Resolution: 163 feet (50 meters) at 65,000 feet

Total FOV: 85.92 degrees

Swath Width: 19.9 nmi (36 km) at 65,000 feet

716 pixel swath

50 spectral bands from .46 to 14.4 µm.

MASTER. (MODIS/ASTER Airborne Simulator). MASTER is the airborne simulator for the ASTER satellite. Both are to study geologic and other Earth surface properties. Flying on both high and low altitude aircraft, MASTER will be operational in 1998. For more information, start at home page http://asterweb.jpl.nasa.gov.

IFOV: 2.5 milliradians

Ground Resolution: 12 to 50 meters (variable with altitude)

Total FOV: 85.92 degrees

Swath Width: 19.9 nmi (36 km) at 65,000 feet

716 pixel swath by unknown

50 spectral bands from .46 to 13.00 µm

HYDICE. (Hyperspectral Digital Imagery Collection Experiment). This hyperspectral device was built by the Naval Research Laboratory and is operated by ERIM. The device is flown on a CV-580 aircraft. A snapshot image (bands 166, 64, and 51) from a sample hyperspectral data set covering 2600 by 800 meters of an urban area of Ft. Hood, Texas is located at http://www.ai.sri.com/~apgd/vI/datasets/Hood/HYDICE_dist.html. A paper entitled Automated Population of Geospatial Databases (APGD) HYDICE Test Dataset I with more details on this sample image set and device can be found at http://www.cs.cmu.edu/~MAPSLab and will lead you to additional references.

IFOV: 1.0 milliradians

Ground Resolution: .75 to 3.75 meters

Total FOV: 18.33 degrees across track

Swath Width: 240 meters to 1.2 kilometers

320 pixel swath by continuous (16 bits per pixel)

210 spectral bands from .40 to 2.50 µm of nominal 10 nm bandwidth

SFSI. (SWIR Full Spectrum Imager). SFSI was designed and developed at the Canada Centre for Remote Sensing to provide remote sensing researchers with both high spectral and high spatial resolution SWIR imagery for developing methodology and promoting applications in this spectral region. In the design of the instrument, the spectral range 1.22 to 2.42 µm was selected to include the region 2.10 to 2.40 µm which is of special interest for mineral identification. This device or its progeny is now available for contract flying only by Borstad Associates. Sample imagery was reported flown at 3000 meters using an Aero Commander. More information can be found at http://www.borstad.com/sfsi.html.

IFOV: .33 milliradians across track by 1.0 milliradians

Total FOV: 9.4 degrees across track

480 by 496 pixel image (13 bits per pixel, 8 bits selected for recording)

120 spectral bands from 1.22 to 2.42 µm of nominal 10 nm bandwidth

Others. A list of hyperspectral-like aircraft and spacecraft sensors can be found at http://www.neonet.nl/itc/~bakker/is_list.html. It contains the name, manufacturer, operator, number of bands, and spectral coverage of 44 devices.

Spaceborne Hyperspectral Imagers.

ALI. (Advanced Land Imager). The ALI is an experimental push broom spectrometer designed to test components for a possible Landsat 7 follow-on instrument. It is to be launched in May 1999 as part of the EO-1 (Earth Observation) satellite. It has two different hyperspectral imagers and another Landsat type imager.

Ground Resolution: 30 meters

315 spectral bands from .40 to 2.50 µm

ASTER. (Advanced Spaceborne Thermal Emission and Reflection Radiometer). With only 14 spectral bands, ASTER is not really a source of hyperspectral imagery; it is mentioned here only in reference to its MASTER airborne simulator, which has more spectral bands. It is to be launched soon. For more information, start at home page http://asterweb.jpl.nasa.gov.

MODIS. (MODerate-resolution Imaging Spectroradiometer). MODIS is an EOS family instrument designed to measure biological and physical processes on a global basis every one to two days. It is to be launched this month, June 1998, on the NASA EOS-AM1 platform. It has a marginal number of spectral bands to be considered hyperspectral, but it is such a big program, it will provide a lot of "hyperspectral" images. For more information start at home page http://www.rsi.ca/.

Ground Resolution: 250 meter, 500 meter, or 1 kilometer

Swath Width: 2300 kilometers

36 spectral bands from .40 to 14.50 µm

OrbView-4. This is a commercial satellite with high resolution panchromatic, multispectral, and hyperspectral imaging systems being built by ORBIMAGE. The hyperspectral sensor is being subsidized by the U.S. Air Force as part of its Warfighter program. This subsidy appears to be by the guaranteed purchase of a certain number of images (rumor has it as 800). The sensor and its images are not classified, and ORBIMAGE can sell any additional hyperspectral images which can be collected.

Ground Resolution: 8 meters

Image Area: 100 km2

unknown number of spectral bands (but many) from .40 to 2.5 µm

NEMO. (Navy Earth Map Observer). The U.S. Navy and the Navy and Space Technology Development Corp. (in Arlington, VA) will jointly build and operate a new mapping satellite with a hyperspectral imager and a five meter panchromatic imager. The hyperspectral sensor will be built by an unspecified company. The press release indicates that "the Navy is interested in coastal area details such as water depth, bottom topography, hazards, temperature and clarity. This information is increasingly important due to the Navy's switch in emphasis from deep-water to shallow-water operations supporting ground troops, … The Space Technology Development Corporation will set up a spin-off company dubbed EarthMap to market NEMO imagery commercially for things like crop and resource mapping."

ARIES. (Australian Resource Information and Environment Satellite). This is a proposed hyperspectral satellite which might be constructed for private mineral exploration in Australia. It is being studied for feasibility by a consortium made up of CSIRO, Auspace, the Australian Center for Remote Sensing (ACRES) with support from Earth Resources Mapping, GeoImage, and Technical and Field Surveys. This $1.2 million feasibility study is being financed by the Australian government and other interested resource industries. Results of this study should be known in 1998.


Components of Process.

TNTmips 5.9 provides all the basic steps required to support hyperspectral analysis.

These include:

• a variety of methods to analyze spectra and images

• the ability to import AVIRIS hyperspectral images

• sample AVIRIS hyperspectral images in RVC format (by special request)

• direct access to the USGS spectral curve library

However, these activities have now defined several additional supporting steps and materials that will be provided in V6.00:

• a more efficient object type for storing a hyperspectral image in RVC

• special graphical tools for inspection and qualitative interpretation

• tutorial materials beginning with a Getting Started booklet

Terminology.

Wavelength Units. The wavelength units used throughout the hyperspectral process are micrometers represented by µm.

Spectral Irradiance. The radiant energy density falling on a surface as a function of wavelength. Measured in units of watts/meter2/micron.

Solar Spectral Irradiance. The energy density of solar radiation falling upon a surface as a function of wavelength. Often shortened to simply Spectral Irradiance when no confusion will result. Measured in units of watts/meter2/micron.

Spectral Radiance. The energy density which has been reflected from a surface as a function of wavelength. This curve can be measured close to the surface by a spectroradiometer or remotely sensed with an aircraft or satellite hyperspectral scanner. Measured in units of watts/meter2/steradian/micron.

Spectral Reflectance. The physical or biophysical property of the surface as a function of wavelength which is the ratio on a matching wavelength basis of the spectral radiance leaving the surface divided by the spectral irradiance reaching the surface. Think of it as the potential at each wavelength to return the energy falling on that surface without absorbing or transmitting it. This curve is often obtained by measuring the spectral radiance curve from the surface and dividing it by a second spectral radiance curve measured nearly concurrently from a material of known or uniformly high spectral reflectance (lead oxide, barium sulfate, …). Be sure to understand that this is a property of the surface and not of the incoming or outgoing radiation (think of it this way: a red surface is red in the sun or the shadows). A unitless value.

Spectroradiometer. The equipment used to measure the spectral radiance of a material in a field or in-situ setting is called a spectroradiometer. It takes time to measure and record the in-situ spectral radiance of a large collection of materials. Thus the incoming solar spectral irradiance can change drastically as a function of time. It is thus a common field practice to concurrently measure the spectral radiance of a reference panel with a high spectral reflectance in the wavelength intervals of interest (magnesium oxide, barium sulfate paint, special canvas, …). The spectral curve for the material is then divided by the spectral curve for the reference panel in each wavelength increment to yield a spectral reflectance for the material. Some implementations of spectroradiometers have built in schemes to collect and divide by a reference spectral irradiance, and thus can record a spectral reflectance directly. It measures in units of watts/meter2/steradian/micron.
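
The reference-panel division can be sketched numerically (all spectra and the panel reflectance below are invented):

```python
import numpy as np

# Target and reference-panel spectral radiance at matching wavelengths
# (synthetic values), plus the panel's known reflectance.
wavelengths = np.array([0.45, 0.55, 0.65, 0.85])        # µm
target_radiance = np.array([12.0, 30.0, 22.0, 80.0])    # W/m2/sr/µm
panel_radiance  = np.array([60.0, 75.0, 70.0, 100.0])   # W/m2/sr/µm
panel_reflectance = 0.98                                # known panel property

# Band-by-band ratio yields the material's spectral reflectance;
# the incoming irradiance cancels out because both measurements
# were made under the same illumination.
reflectance = panel_reflectance * target_radiance / panel_radiance
print(np.round(reflectance, 3))   # → [0.196 0.392 0.308 0.784]
```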

Spectral radiance curves for the components of a real or hypothetical scene can be assembled into a library. These libraries can then be used to experiment with a collection or mix of materials to determine how they can be separated with hyperspectral analysis. These spectral curve libraries can also be used for reference in the curve comparison methods built into several analysis procedures applied to hyperspectral images.

Apparent Spectral Radiance. The wavelength varying data values recorded for each channel of an aircraft or spacecraft hyperspectral device are commonly referred to as apparent spectral radiance. The word "apparent" is used here to mean "proportional to". It is these relative values as a function of wavelength for each cell in an image that are important for hyperspectral analyses. The actual digital values are unitless and can vary greatly for a specific surface according to the amount of solar spectral irradiance (for example, time of day, haze, detector sensitivity, and many more factors). Some hyperspectral analysis procedures use these wavelength varying digital values of known materials in the hyperspectral image as "signatures" to mathematically locate other cells or parts of cells in the image which contain the same materials.
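
One widely used way to compare a cell's values against such a signature is the spectral angle between the two curves, which ignores overall brightness; this sketch is illustrative and is not necessarily the method TNTmips uses (all spectra are invented):

```python
import numpy as np

def spectral_angle(spectrum, reference):
    # Angle (radians) between two spectra viewed as vectors; scaling a
    # spectrum up or down (e.g. sun vs. shade) leaves the angle unchanged.
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

reference = np.array([0.2, 0.5, 0.9, 0.4])   # known material's signature
sunlit    = np.array([0.4, 1.0, 1.8, 0.8])   # same shape, brighter
other     = np.array([0.9, 0.4, 0.2, 0.6])   # different material

print(spectral_angle(sunlit, reference))      # ≈ 0: matches the signature
print(spectral_angle(other, reference))       # large angle: dissimilar
```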

Spectral Character. This term refers to how a spectral curve (in other words, its representative data values) varies relatively from wavelength to wavelength. The spectral radiance of a material can be said to "have a lot of unique spectral character" if collected for narrow and sensitive spectral intervals. In some procedures, this is the key information being used for the mapping or identification of surface materials. Spectral character of physical surfaces may well be the same in the sun and the shadows, as the spectral reflectance of these materials is the same in the sun and the shade or the dark. Spectral character of vegetative materials may also be consistent under varying illumination conditions.

Apparent Spectral Reflectance. Many kinds of approaches are used to reduce the hyperspectral image cell values from apparent spectral radiance to the spectral reflectance of the surface cell. The new hyperspectral images which result are said to have cell values that represent apparent spectral reflectance. That is, each calibrated image band shows the relative variation in reflectance from cell to cell for that wavelength band. The more accurate these processes are in yielding cell values approaching actual spectral reflectance, the more accurately surface materials can be analyzed using spectral reflectance curve libraries. This in turn enables using multiple images covering larger areas, collected on different dates and under different atmospheric conditions, and so on.
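
One simple member of this family of reductions is internal average relative reflectance (IARR), which divides each cell's spectrum by the scene-average spectrum so that scene-wide illumination and atmospheric factors largely cancel; the sketch below uses synthetic data and is only one of several possible approaches:

```python
import numpy as np

# Synthetic radiance cube: (bands, rows, cols).
rng = np.random.default_rng(1)
cube = rng.random((50, 8, 8)) + 0.1

# Scene-average spectrum, one value per band.
mean_spectrum = cube.mean(axis=(1, 2))

# Each cell's spectrum divided by the scene average: cells now vary
# around 1.0 relative to the scene in every band.
apparent_reflectance = cube / mean_spectrum[:, None, None]
print(round(float(apparent_reflectance.mean()), 6))   # → 1.0
```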

Spectral Reflectance as Physical Property of a Surface. It is easy to grasp that the spectral reflectance of rocks and minerals is primarily a physical property. Bare soils, concrete, asphalt, paint, and other man-made surfaces may also be characterized as possessing fairly time-constant physical spectral reflectance curves. If you shade or wet a geological sample, it will still possess approximately the same spectral reflectance curve. The spectral reflectance of such materials is usually the same at night as in the day. A few things, such as algae growth and weathering, can alter the spectral reflectance of these materials in a natural setting.

Spectral Reflectance as Biophysical Property of a Surface. The spectral reflectance of a vegetative surface is a highly variable property. It can vary due to physical properties of the plant materials and the underlying surface. But it can also vary as a function of time due to the many biological processes at work. Loss of tissue water, alteration of chlorophyll, changing foliar displays (plant morphology), pigment types and concentrations, and many other factors all combine together to control the in-situ spectral reflectance of vegetative surfaces. Clearly it is much harder to build and use spectral reflectance libraries of different plant materials and controlling conditions. However, with all these variables, it is still possible to map plant materials and conditions using hyperspectral analysis.

Spectrum, Spectra, and Spectral Curves. Spectrum is the English singular, whereas spectra is the plural. So a single spectral radiance scan of a material would be called a spectrum, as would the spectral reflectance resulting from its ratio to a solar spectral irradiance reference surface. However, for almost all hyperspectral activities, a single curve is not significant, as its statistical variability is not defined for the mathematical methods employed. As a result, almost every time reference is made to the spectral radiance or spectral reflectance of a material, it is to the mean, variance, … of several individual sample curves. Thus the term spectrum is not commonly used, and spectra is commonly used to indicate this statistically defined surface property of a material or materials. The term spectral curve is often used interchangeably with spectra.
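
In code terms, such statistically defined spectra are just per-band statistics over several sample curves (the numbers below are invented):

```python
import numpy as np

# Three sample spectra of the "same" material, three bands each.
samples = np.array([
    [0.21, 0.52, 0.88],
    [0.19, 0.48, 0.91],
    [0.20, 0.50, 0.90],
])

# The material's "spectra": per-band mean and sample variance.
mean_spectrum = samples.mean(axis=0)
variance = samples.var(axis=0, ddof=1)
print(np.round(mean_spectrum, 3))
print(np.round(variance, 5))
```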

Free to all!

Current Situation. At present, hyperspectral image collection and analysis (with the exception of some practical applications in mineral exploration) are research activities. MicroImages, our few commercial competitors, government, university, and business laboratories are all experimenting with and demonstrating a few potential applications of these concepts in almost all disciplines. It will take the next 10 to 20 years to move this research into mainstream use by our societies. As a result of this and several unanticipated turns of events, the hyperspectral analysis capabilities for TNTmips will all be free to anyone who wishes to use them via TNTlite 5.9.

It should be clear to everyone in or entering this activity with anyone's software products that we are all just moving from the realm of early innovators into the era of early adopters in this whole area of promising technological developments. But, in the last year, the few early adopters have been spurred onward by the announcement of the pending availability of sample images from commercial airborne devices; more portable, cheaper, and better spectroradiometers; and commercial and public satellite images such as Lewis and HYMAP. It is also clear that many problems remain to be resolved before these images and their analysis move into widespread, general use.

Background. Why even add this process into a commercial product in a manner which ends up free to all? The TNTlite concept and policy created two years ago by MicroImages did not anticipate its effect on, or even the addition of, hyperspectral image analysis. The limits in TNTlite were designed to be primarily project-area oriented and not spectral-band oriented. Restricting the use of hyperspectral imagery would require adding more limitations into TNTlite, and these would make it impossible to experiment with hyperspectral analysis via TNTlite. After careful review of the situation, it was determined that applying further limitations to an experimental concept would not serve your interests or those of MicroImages.

Area Limited Concept. The largest single technological problem in using hyperspectral images is the huge amount of data represented by a single hyperspectral image. A review of all the available and pending unclassified aircraft and satellite devices (see above partial list) shows that they will all collect images of a small number of cells, most in the general area of 512 by 512 pixels. The state of the art in on-board storage media, downlink bandwidth, Internet bandwidth, distribution media, available computing power for analysis, and other factors all combine to limit the spatial area of such images. These limits have to be traded off against the pixel size of the images to permit the collection of 128 to 256 spectral bands. It is unlikely that quantum changes in these interacting limitations are about to occur. As a result, for the next few years, the pixel count of such images will be small, and any immediate significant increase will be accompanied by a decrease in the number of spectral bands.

Meaningless TNTlite Limits. TNTlite 5.8 limited the spatial extent of the rasters involved to a total of 640 by 480 (307,200) cells. This does not really restrict the use of even the most complicated hyperspectral analysis techniques that can be incorporated. However, TNTlite 5.8 also had a seldom-noticed limit of eight on the number of raster objects which could be used in any process. This raster count limit has been totally removed in TNTlite 5.9. It is also anticipated that the raster spatial limits might be raised slightly, as necessary, in future releases of TNTlite. These limits would need to be increased by no more than 10% to accommodate all the hyperspectral image sizes of the pending satellite devices (see above partial list) should they be successfully launched.

MicroImages also concluded that retaining or adding limits to TNTlite to control its use with all hyperspectral images would be more of a nuisance than a serious impediment to most users of the TNTlite product. Limiting access to this process would mean altering the basic philosophy of why TNTlite is being made available, such as "you can have some of the features but not all of them". Merely raising the limit on the number of raster objects to a value large enough to permit experimentation with hyperspectral analysis by students would be easily circumvented. Just use the results of some other commercial software or separate spectral dimensional analysis software to select only those spectral bands of most interest. Limiting the spatial size of the images to a few percent less than the total size of these hyperspectral images would simply be a nuisance.

Final Management Decisions. So, with all of this as relevant background, why has MicroImages chosen to make all this complicated hyperspectral analysis software available for totally uncontrolled use via these modifications to TNTlite, rather than impose some other limitations? First of all, this is a complicated and research-oriented activity which we all have to work at to make simpler. We all need to work together as much as possible to understand how to make use of these important future developments. Everyone--students, geologists, biologists, ecologists, and the engineers and physical scientists building the devices--needs at least one place they can find the analysis tools required and contribute back ideas, algorithm concepts, and designs to improve and speed the use of these methods. Consider the analogy of the early and free distribution of Mosaic for network access: through its wide use and improvements based on feedback, standard, easy public access to the Internet evolved into Netscape and Explorer. Certainly, MicroImages wants and must sell its professional products to stay in business, define geospatial analysis, and stay at the forefront of it. However, this area, with such limited current and pending image availability, needs a lot of nurturing around the world. We hope all its users via TNTlite and the professional TNT products will agree and contribute many useful ideas for future improvements.

Import Procedures.

V5.90 will import NASA AVIRIS aircraft images into raster objects from the 1994 band-interleaved-by-line format and from the 1996-97 (and probably 1998) band-interleaved-by-pixel format. The AVIRIS sensor and flight program have been around for more than 10 years, so there may be other formats of AVIRIS imagery from other years and flight programs. As you encounter other formats for AVIRIS, forward a sample image to MicroImages on CD-R disc, and its import will be supported. AVIRIS images come with several other files containing auxiliary information. These include files with the extensions of:

*.avhdr general information about the flight line

*.brz browse image of the complete flight line (imported by V5.90+)

*.geo geometric calibration data

*.log log information of the distribution processing

*.eng engineering data

*.nav navigation data

*.rfl reflectance inverted AVIRIS image data

*.readme

Information from these additional files is being placed in the Project File as needed. Note, the import of AVIRIS is now being modified post-V5.90 to import the *.brz browse image of the complete flight line into a separate Project File. This will enable you to quickly inspect the coverage of the flight line to determine which full frames you wish to import.

As outlined, a variety of other governmental and commercial aircraft hyperspectral devices exist with varying formats, wavelength ranges, spectral band intervals, and so on. Please note that many of these devices are experimental, and thus the format of their images can change frequently at the whim of their operators. As a result, it will be a continuing struggle to import all of the hyperspectral image sources. If you can supply a sample image and format documentation, its import will be supported. As MODIS, EO-1, Orbview4, and other satellite hyperspectral imagery becomes available, these formats will be added.

Recording Wavelength Ranges.

Most of the images you have been using within TNTmips are of low spectral dimensionality and come in formats which do not record their wavelength ranges. Thus, the Project File has not required or automatically recorded spectral information for each raster object it contains. TNTmips has required that you keep track of the spectral band information for your multispectral images where needed, usually in the description of the object. Often it is not even required, as you simply remember by import order which of the 7 LANDSAT TM bands is green, red, photoinfrared, and so on. Furthermore, the supervised or unsupervised classification processes in TNTmips do not require wavelength information. But, this is still a nuisance which someday will be eliminated. However, it is non-trivial to deal with this historical problem in the myriad of places where the specific image is selected.

It is impossible to deal with hyperspectral images in this individual fashion in TNTmips. Manually recording the wavelength of each band, and later selecting bands one at a time, simply would not work. Several hyperspectral calibration and analysis procedures also require information about the wavelength of each band. These include operations such as matching spectral curve libraries, spectral browsing, and image classification.

A new subobject has been added to the Project File to store wavelength information for a hyperspectral collection of raster objects. The import process for AVIRIS images creates this subobject. When the new hypercube object (see below) is available, this spectral reference and related information will be integrated directly into it. As MicroImages develops import processes for other sources of hyperspectral imagery, these wavelength ranges and related information will be automatically created for them in current raster objects and the hypercube object.

The process has a simple interface for assigning this wavelength information to the hyperspectral images you import by any other means into individual raster objects (for example, by using generic raster import, the GGR format, or via SML). If you import or create these raster objects with the wavelength information inserted into the text description field (for example, via SML), you can use the "Auto Define" feature provided in this utility. It will scan all raster object text description fields for a sub-string containing the word "wavelength" and read the number next to it. When decoded, these spectral parameters are stored in the subobject.
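A scan of this kind can be sketched as follows. This is an illustration in Python, not MicroImages' implementation; the function name, dictionary layout, and description strings are all hypothetical:

```python
import re

def auto_define(descriptions):
    """Scan free-text description fields for the word "wavelength" and read
    the number next to it, in the spirit of the "Auto Define" feature.
    `descriptions` maps a raster object name to its text description."""
    pattern = re.compile(r"wavelength\D*?(\d+(?:\.\d+)?)", re.IGNORECASE)
    found = {}
    for name, text in descriptions.items():
        match = pattern.search(text)
        if match:
            found[name] = float(match.group(1))  # band center, micrometers
    return found

bands = {
    "Band_001": "AVIRIS band 1, wavelength = 0.40 um",
    "Band_002": "AVIRIS band 2, Wavelength: 0.41 um",
    "Band_003": "no spectral information recorded",
}
print(auto_define(bands))  # {'Band_001': 0.4, 'Band_002': 0.41}
```

Bands whose descriptions lack the keyword are simply skipped, which mirrors why the text above asks you to make sure the wavelength is present in the description field before running the scan.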

Viewing Histograms.

This hyperspectral process provides a standard histogram visualization window, similar to that in the mosaic process, for quick inspection of each spectral band. This spectral browsing tool is available from within the view window for extracting spectra from the hyperspectral images. All necessary calibrations are done on the fly, and the user receives data already normalized to the range 0.0 to 1.0.

Sample AVIRIS Imagery.

AVIRIS images are collected from 65,000 feet with a U-2 aircraft which NASA refers to as the ER-2 (Earth Resources). All AVIRIS imagery can be ordered on 8 mm tape for a processing charge of $250. Most of this charge pays a NASA contractor to find the images and copy them. While these images are few and far between, they are of high quality, and some have been collected for various interesting localities around the world. Unfortunately, AVIRIS images are not very well indexed, as this is an experimental program. Recently MicroImages has been able to access (unreliably) a web site at NASA/JPL at http://makalu.jpl.nasa.gov/locator which provides U.S. flight lines flown since 1992, overlaid upon regional maps. However, images collected before 1992 and some important post-1992 imagery are not indexed.

Since the ER-2 is wide ranging, NASA has conducted AVIRIS flight programs for experimenters in other nations. For example, a flight program was conducted several years ago in Australia in cooperation with CSIRO. It is not clear at this point how to find out about AVIRIS images collected on such foreign missions or where they are held.

Several sample hyperspectral images are posted on the NASA/JPL web site in spectral radiance and spectral reflectance form for free downloading. MicroImages has downloaded a set of four consecutive sample images which were flown north to south in 1997 over the Cuprite, Nevada gold mine region. Over the past 20 years, this particular area of exposed hydrothermal deposits has been a "public" test site for many geologically oriented remote sensing studies. Numerous articles can be found in the literature about the application of remote sensing methods to mapping this test area. As a result, AVIRIS imagery has been collected of this area every year.

MicroImages has imported the four Cuprite, Nevada hyperspectral images into RVC Project Files and selected one for distribution as a sample or test image in both original apparent spectral radiance and apparent spectral reflectance forms. It will also be used as the basis for the sample image in the Getting Started tutorial. Each of these Project Files is approximately 140 MB in size. These individually prepared CDs could not be distributed with V5.90. If you need these sample hyperspectral images, they will be shipped by mail upon request without charge. Eventually, they will be distributed to all clients on CD in the improved and compressed hypercube raster object described below.

Spectral Curve Libraries.

MicroImages will act as a clearinghouse for information concerning the availability of any public, private, or "for sale" libraries of spectral curves. This information may be distributed via a MEMO, a location on microimages.com, and a link to other web locations where libraries are located. Please provide information on the location of additional libraries as you encounter them. Provisions have already been made in the Project File structure to store these spectral curve reference libraries as well as those libraries you develop from imagery or from spectroradiometers or spectrophotometers. It will also be necessary to provide import capabilities for each library, requiring a description of the format of the curves. Functions will also be added to SML so that you can create scripts to import confidential or classified libraries which cannot be provided to MicroImages or others to import.

Last minute references about other possibly useful spectral curve libraries are provided below.

JPL Spectral Library. The JPL Spectral Library contains the spectral reflectance of 160 minerals in digital form over the wavelength interval of .4 to 2.5 µm. These measurements were collected in the laboratory with a spectrophotometer and prepared samples. http://asterweb.jpl.nasa.gov/speclib/

Topographic Engineering Command/U.S. Army Library. Spectral reflectance of vegetation, soils, rocks, and man-made materials over the wavelength interval of .4 to 2.5 µm. http://curly.tec.army.mil/sst/hypersig.html

Caltech Spectral Library. Provides several libraries of spectral reflectance curves for public use covering plants common to the Mojave Desert (for example, creosote, bursage, and alfalfa). Various instrumentation was employed including: Analytic Spectral Devices Personal Spectrometer 2 (.33 – 1.06 µm), GER-IRIS spectrometer (.34 – 2.5 µm), Beckman UV5240 spectrophotometer, and a PIMA laboratory spectrometer. terrill@mars1.gps.caltech.edu

Rangeland Spectral Library. This older library of approximately 2000 spectral radiance and spectral reflectance curves of .35 to .8 µm was collected by this author and graduate students from 1969 to 1972. These curves are for the grasses, larger plants, and soils which make up the shortgrass prairie and rangelands located in the central United States. Typical common names of the plants would be blue grama grass, buffalo grass, rabbit brush, sage, broomsnake weed, … These spectral curves were measured in-situ with a custom designed field spectrophotometer (see reference thesis below).

These curves were collected together with extensive ground control measurements as a scientific database to discover the concepts exploited today in the many remote sensing biomass and related vegetation indices. Almost all of these curves represent ground plots of .25 square meter from which the vegetation was subsequently removed for measurement of its wet and dry biomass, chlorophyll and other pigment concentrations, proportions of green and dead biomass, …

This library is referenced here only as an indication of the magnitude of effort that goes into the collection of vegetation oriented spectral reflectance libraries and appropriate control measurements. Trying to use this library today has several complications. 1) The spectral range of these curves is too limited for today's hyperspectral images of at least .4 to 2.6 µm. 2) The original digital storage media (computer tapes) decayed to the point that they were unreadable. However, a printed copy of each curve and the digital tabular values still exist. This material was borrowed and copied about a year ago by Dr. Compton J. Tucker (see thesis below) at NASA/GSFC who wanted to have a contractor reduce it again to digital curve form. The current status of this effort is unknown (contact Compton J. Tucker, NASA/GSFC, Code 923, Greenbelt, MD).

Design of a Field Spectrophotometer Lab. by Robert L. Pearson and Lee D. Miller. 1971. Science Series No. 2. Department of Watershed Science, Colorado State University, Ft. Collins, Colorado. 102 pages. (available as M.S. thesis of Dr. Robert L. Pearson from University Microfilms)

Remote Estimation of a Grassland Canopy/Its Biomass, Chlorophyll, Leaf Water, and Underlying Soil Spectra. M.S. Thesis of Dr. Compton J. Tucker. August 1973. Colorado State University, Fort Collins, CO. 212 pages. (available from University Microfilms)

Remote Multispectral Sensing of Biomass. Ph.D. Thesis of Dr. Robert L. Pearson. May 1973. Colorado State University, Fort Collins, CO. 180 pages. (available from University Microfilms)

Spectral Estimation of Grass Canopy Vegetation Status. Ph.D. Thesis of Dr. Compton J. Tucker. September 1975. Colorado State University, Fort Collins, CO. 106 pages. (available from University Microfilms)

Other. Someone just posted this on the network: "You can try this URL http://www.lut.fi/ltkk/tit/research/color/lutcs_readme.html. It's not really 'natural' materials, but it's really full of reflectance spectra."

Hyperspectral Cube Object.

Why do we need it? You only think you have large image storage requirements now; wait until you begin to work with hyperspectral objects. Hard drives will continue to expand, microcomputers will get faster, memory will expand, and the larger DVD write-once drives are coming soon. But the size and number of new hyperspectral images will outstrip these incremental gains. A single AVIRIS image is 16 bits per pixel, 224 spectral bands, 512 columns, and 614 lines. This comes out to 140 megabytes. Furthermore, the entire purpose and objective of using a hyperspectral image can be defeated by employing lossy compression methods; only lossless compression should be used. Also, processing of hyperspectral images emphasizes the spectral dimension rather than the two spatial dimensions. If a hyperspectral image storage format is optimized for display by favoring optimal spatial organization, it will be inefficient for spectral analysis. If a format is optimized for analysis by keeping all spectral values of a pixel together, it will be inefficient for simply displaying gray scale or color renditions.

Small Image Footprints. Most hyperspectral images which will soon become available from aircraft or satellite devices will range up to about 150 megabytes (see lists above). This does not seem large by hard drive and TNTmips standards of today. But, remember that these images cover very small ground patches and have large image cells. Practical projects are going to require a collection of several of these images. For example, a single AVIRIS flight path might collect 10 such images and still only cover six by 60 miles at a coarse resolution. Obviously, one will tend to accumulate a large number of these images, and processing them from CD-ROMs using any currently used format would not be fast.

Big Image Footprints. Taking this one step further, MODIS, an expensive and important component of the EOS program, will produce nominally low cost or free hyperspectral imagery of potentially global extent at one to two day intervals. Assuming a successful launch, rough calculations show that a single frame of this imagery of 2300 by 2300 km, 36 spectral bands, 250 meter resolution, and 16 bits per pixel yields six gigabytes per image. TNTmips can effectively handle this size image on a spatial basis, but storage format improvements employing lossless compression are needed to handle the spectral aspects.
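The frame sizes quoted above follow from simple arithmetic, sketched here in Python:

```python
def frame_bytes(lines, columns, bands, bits_per_cell):
    """Uncompressed size in bytes of one hyperspectral frame."""
    return lines * columns * bands * bits_per_cell // 8

# AVIRIS frame as described above: 614 lines x 512 columns x 224 bands x 16 bits.
aviris = frame_bytes(614, 512, 224, 16)
print(f"{aviris:,} bytes")            # 140,836,864 (~140 MB)

# MODIS frame as described above: 2300 km square at 250 m resolution,
# 36 bands, 16 bits per pixel.
pixels_per_side = 2300 * 1000 // 250  # 9,200 pixels on a side
modis = frame_bytes(pixels_per_side, pixels_per_side, 36, 16)
print(f"{modis / 1e9:.1f} GB")        # 6.1 GB
```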

Current Formats. The RVC project file is the most efficient geospatial analysis format devised. A Project File can store all kinds of diverse geodata in a single convenient file. Over the years, MicroImages has specialized in advanced raster storage methods which have been copied by others. Combining pyramiding, tiling, and compression of rasters was an early innovation in the TNT products which provided for smaller rasters of lossless or lossy character and very fast access to very large images of any size. Without these spatial dimensional reduction features, many of the huge mosaics and other projects being prepared in TNTmips would not be practical.

Images stored in the RVC structure are heavily optimized for the most common use: retrieving and displaying one to three images using the minimal amount of memory required by TNTmips. Sophisticated tiling, pyramiding, lossy and lossless compression, optimal buffering, and other tricks make it possible for TNTmips to display, mosaic, transform, and accomplish many common tasks for huge images and maps all using real memory of 16 MB. However, these methods are all oriented toward optimal management of the spatial (X-Y) dimension of images. They are not optimal for working with hyperspectral images which all have small spatial dimensions and high spectral dimensions of 256 and potentially larger (increasing as imaging devices move beyond 2.6 µm).

Hyperspectral analysis procedures need, all at once, all the data values making up the spectral curve for a pixel to operate efficiently for computational and comparative purposes. Formats popularized for multispectral and hyperspectral image analysis by competing products use line-interleaved-by-spectral-band (line 1 is band 1, line 2 is band 2, …) or pixel-interleaved-by-spectral-band (band 1, band 2, band 3, … values for each pixel). While these formats are effective for hyperspectral analysis, they are ineffective for simple display of one to three images unless considerable real memory is available.
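The trade-off between layouts can be seen with a toy example. The sketch below (plain Python, illustrative only) builds the same tiny image in band-sequential order and in band-interleaved-by-pixel order, and shows that a pixel's spectrum is one contiguous slice in the interleaved layout but a widely strided gather in the band-sequential layout:

```python
# Toy hyperspectral "image": 3 lines, 4 columns, 5 bands, with cell value
# band*100 + line*10 + column so every layout is easy to verify by eye.
LINES, COLS, BANDS = 3, 4, 5
def value(band, line, col):
    return band * 100 + line * 10 + col

# Band-sequential: one full band after another -- fast for displaying a band.
bsq = [value(b, l, c) for b in range(BANDS)
                      for l in range(LINES) for c in range(COLS)]
# Band-interleaved-by-pixel: all band values of a pixel sit together --
# what per-pixel spectral analysis wants.
bip = [value(b, l, c) for l in range(LINES)
                      for c in range(COLS) for b in range(BANDS)]

def spectrum_bip(flat, line, col):
    # The whole spectrum of a pixel is one contiguous slice.
    start = (line * COLS + col) * BANDS
    return flat[start:start + BANDS]

def spectrum_bsq(flat, line, col):
    # The spectrum is strided: one value out of each band-sized block.
    return flat[line * COLS + col::LINES * COLS]

print(spectrum_bip(bip, 1, 2))  # [12, 112, 212, 312, 412]
print(spectrum_bsq(bsq, 1, 2))  # the same values, gathered from far apart
```

On real 140 MB frames those "far apart" reads become separate disk seeks, which is exactly why a spatially organized format is inefficient for spectral analysis and vice versa.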

Implementation Plans. MicroImages has designed on paper a new storage structure for V6.00 for use with hyperspectral images. This new raster object will be called the hyperspectral cube object or a "hypercube object" for short. It will store all of a hyperspectral image in a single raster object, employing lossless compression. It has no relationship to any of MicroImages' or competitors' current raster storage structures. It has been optimized for the cube-like hyperspectral images to achieve significant lossless compression, to support rapid analysis, and as the basis for the display of several planned hyperspectral image graphical analysis and inspection tools.

Graphical Displays.

The Obvious. Competitive commercial hyperspectral analysis products contain some attractive and innovative tools for the direct viewing of several unique characteristics of hyperspectral images. These kinds of graphical tools are particularly useful for those just getting used to the idea of what hyperspectral images are and how they are different from normal images. They include such simple tools as hypercube displays and complex tools such as a rotating "n" space display which provides the basis for visual, qualitative hyperspectral analysis. Using the pending hypercube object as a basis, similar tools of these types will be constructed within TNTmips.

Feature Mapping. MicroImages also has its own ideas of graphical methods which support hyperspectral analysis. Many of you are familiar with the unique Feature Mapping process which has evolved into TNTmips over the past dozen years. This user friendly approach to mapping individual materials has still not appeared in competing systems. It was initially created to provide a push-button-like, interactive approach to mapping one or several materials from video, airslides, and other poor quality images. It has been used extensively to map and assemble useful information from many thousands of color video frames for water surfaces (small ponds) based on their unique spectral return.

Many hyperspectral image analysis applications can differ totally in objective from those you are familiar with using on multispectral and multitemporal images of low spectral dimensions. Most applications of automated classification of these low spectral dimension images have been for "wall-to-wall" mapping. That is, to assign every cell in the area of interest into its most probable category with the categories determined in advance (supervised classification) or after analysis (unsupervised). Some of you have learned to use unsupervised classification to prepare images with large numbers of categories (256 or larger) and then use the unique approach of feature mapping to assist in relating the classes to one or two unique materials.

Hyperspectral analysis methods (for example, Matched Filtering, see below) can be used to more accurately map a single important, but sparse material. Any other surface material or condition in the image is considered simply as noise to be rejected in some fashion by the analysis. This yields quite a different, and for the moment, complex approach compared to your familiar multispectral image applications. It is MicroImages' intention, after the hypercube object is in place, to embed one or more of these hyperspectral analysis methods "underneath" the feature mapping interactive interface with only a few user controls. It is proposed that this combination of:

• a hypercube object for fast analysis and access

• sparse material hyperspectral mapping concepts

• feature mapping for training, interaction, and management

• overlaying known information in feature mapping (geologic map unit vectors, viewing a normal color image, …)

will provide a more easily used, push-button-like approach to mapping

• a geologic surface material

• an emerging condition in vegetation

• a chemical spill, …

Getting Started Booklet.

Certainly a Getting Started booklet on using hyperspectral analysis is needed and is being worked on now. Watch the Getting Started booklet depository on microimages.com for its first appearance. All that this kind of short booklet can do is get you started in the right direction. It can cover only a fraction of the complexity in using this process. In its first issue, it will stress how to use the process' features but cannot teach you how to complete a successful project using hyperspectral images. Of course, MicroImages will continue to create and recommend more reference materials on this process. However, it will also be necessary for you to do considerable independent technical reading on the subject to get up to speed beyond the mere operation of the software.

Spectral Dimensional Reduction Techniques.

Another area of research and development in hyperspectral image processing has been the concept of spectral dimensional reduction. Image downlink bandwidth, on-board and final image storage, processing time, and other similar engineering limits would be greatly reduced, along with cost, if only the specific spectral bands of most value could be collected. There have been various approaches to this over the past 20 years. Many multispectral imagers (especially airborne scanners) have been built to collect a limited number of selected spectral bands of narrow, selectable wavelength ranges. In a few applications, the analysis of spectral curves, hyperspectral imagery, and finally testing of selected spectral intervals has led to spectral imagers with specific purposes. The SeaWiFS satellite imager is an example of such a device, designed to collect several narrow spectral bands for a specific range of purposes where spatial resolution was not important and could be traded off against wide area coverage.

Spectral dimensional reduction techniques can determine a subset of narrow spectral interval images that yields results approximating, to a desired accuracy, those obtained using all available spectral bands, at a fraction of the cost. For example, 30 AVIRIS spectral bands might map the surface geologic materials in the Cuprite, Nevada AVIRIS image with most of the accuracy which would result if all 224 spectral bands were used. An example of a new dimensional analysis scheme which could be implemented by MicroImages is illustrated on the cover of Photogrammetric Engineering and Remote Sensing for June 1998, an image produced by the Minimum Noise Fraction (MNF) transform.

Please provide any technical papers or concepts from other disciplines for spectral dimensional reduction schemes; this is an important concept to incorporate into future TNTmips versions. Your papers, and your work with the ever-widening spectral ranges in libraries and hyperspectral imagery, will lead to other application-specific, cheaper systems based on fewer, selected spectral bands.

Flight Lines Versus Images.

Most of the available hyperspectral images, including AVIRIS, are of the "push broom" or scanning type. They collect and record all the spectral radiance values for a single image line perpendicular to the forward motion of the device. As a result, any number of lines can be collected, each with a fixed number of pixels. However, due to limitations in media and downlinking, and for convenience in recording, delivering, indexing, and so on, a predetermined number of lines are "chopped" out to make an image.

AVIRIS and its progeny actually collect a longer flight line of "images" which can be analyzed together all at once. That is to say, the images along a single AVIRIS flight line, and from flight lines of similar devices, are one larger image collected in a short period of time. However, due to the large size of a hyperspectral image, it is convenient for the time being, even in TNTmips, to manage these sequential images as separate Project Files. Nevertheless, each such image in a continuous flight line can be subjected to identical analysis and the results assembled into a single map.

You must determine if several images of an area make up a flight line or otherwise can be analyzed as one using the methods to be introduced in the sections below for analyzing single images. This depends on the design of the hyperspectral sensor and the processing method used. For example, library test spectral radiance or spectral reflectance curves can be assembled from one image with known training features, a dark field from another, and a flat field from a third, if all the images are in five sequential frames from a single AVIRIS flight line.

Using Spectral Curves.

A definitive reference on the factors which affect the spectral reflectance properties of minerals is about to be published as Chapter 1 in the Manual of Remote Sensing. This chapter is Spectroscopy of Rocks and Minerals, and Principles of Spectroscopy by Roger N. Clark, USGS, Denver. While the publication is now over a year late, a copy of this entire chapter can be downloaded from http://speclab.cr.usgs.gov. It is important to note that he ends this chapter with some good advice: "A word of caution with spectral libraries, and spectra obtained from other sources in general: wavelength errors are common except from data obtained on interferometers. This author and colleagues at the USGS have evaluated many spectrometers and other spectral libraries and have found many to have significant wavelength shifts. One mineral with a stable absorption feature is a well-crystallized kaolinite, which has a sharp absorption at 2.2086 +/- 0.0003 µm and is commonly found in visible and near-IR libraries." The indication here is to use this absorption band to check any libraries found and to check spectrometers.
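Clark's advice can be turned into a simple check. The sketch below (Python, with fabricated reflectance values for illustration; this is not a USGS or TNTmips procedure) finds the reflectance minimum near the kaolinite feature and reports its offset from the 2.2086 µm reference:

```python
REFERENCE_UM = 2.2086   # well-crystallized kaolinite absorption feature

def wavelength_shift(wavelengths, reflectance, window=(2.15, 2.27)):
    """Locate the reflectance minimum inside `window` and return its offset
    from the kaolinite reference wavelength, in micrometers."""
    in_window = [(w, r) for w, r in zip(wavelengths, reflectance)
                 if window[0] <= w <= window[1]]
    min_wavelength = min(in_window, key=lambda pair: pair[1])[0]
    return min_wavelength - REFERENCE_UM

# Fabricated curve sampled every 10 nm, with its dip placed at 2.21 um:
wl = [2.15 + 0.01 * i for i in range(13)]          # 2.15 .. 2.27 um
refl = [0.80, 0.78, 0.75, 0.70, 0.60, 0.45, 0.30,  # minimum at 2.21
        0.42, 0.58, 0.68, 0.74, 0.77, 0.79]
print(round(wavelength_shift(wl, refl), 4))        # 0.0014
```

A shift much larger than the spectrometer's sampling interval would suggest the library or instrument has the kind of wavelength error Clark warns about.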

Secretive Nature! Why can't you or MicroImages find more spectral libraries? Spectral reflectance curves can be measured in-situ with spectroradiometers or borrowed from good libraries of others as the basis of hyperspectral analyses. But, only a few public libraries of spectral curves are available (JPL, USGS, and Johns Hopkins). Collecting representative, in-situ spectral libraries is an expensive task, and those which are assembled are proprietary, as they give their owners competitive advantage in both the collection (by using predictive and spectral dimensional reduction methodologies) and analysis of hyperspectral images.

Geological Libraries. Spectral reflectance libraries are commonly collected for geologic, soils, and man-made materials where it is easy to identify and characterize the materials involved, since their spectral reflectance is a physical property of their surface. For example, simply recording the name of the material might constitute all that is needed for the future use of its spectral reflectance in a library.

Vegetation Libraries. It is equally easy to collect the spectral reflectance of a biological surface. However, it is far more difficult and expensive to record the properties of that surface. It means nothing to build up a spectral library of curves for vegetative surfaces as a function of their names (corn, soybeans, grass, oak tree, …) regardless of how closely and carefully the species might be identified.

The spectral reflectance of vegetative surfaces is a biophysical property and is controlled by the one-time circumstance of that surface (plant biomass, pigment concentrations, relative dead and live materials, soil exposure, leaf transparency, leaf turgor, foliar display, and many other factors). Collecting a meaningful set of in-situ control information about vegetative surfaces takes a great deal of time and effort. To date, most tedious studies of this type have been undertaken only to determine how and what might be measured about vegetation from ground based and image hyperspectral strategies.

Image-Based Libraries. Another strategy is to build transient spectral libraries directly from the hyperspectral imagery to be analyzed. This has the advantage that either apparent spectral radiance or apparent spectral reflectance curves can be used directly from the imagery if it can be assumed to be collected all at one time under uniform spectral irradiance conditions. This approach has the disadvantage that the spectral library becomes time-of-image-collection dependent. It is also often difficult to obtain the necessary knowledge of the in-situ ground conditions which the spectra represent. However, successful hyperspectral analysis schemes are built upon this general approach. In fact, some methods require only a few sample image-based spectral library curves of a specific material to proceed.

Home-Grown Libraries. Building and managing your own spectral libraries (collected in-situ, from images, assembly from other libraries, or other methods) requires a significant collection of spectra management tools which have been built into the TNTmips hyperspectral process. Single spectral curves are not usually used in libraries and analyses, and a number of representative curves must be combined to provide the needed statistical variability of the material represented. Libraries of spectra may vary in properties from each other and from the available hyperspectral images in total wavelength range, spectral bandwidth, reference material, and other factors. Ground-based in-situ measurements of spectral curves may cover different wavelength ranges and wavelength intervals from one spectral radiometer to the next, and all differ from the available hyperspectral images. Each current hyperspectral imaging device differs from all the others in many of these properties, may have variable wavelength range and band setting, or be altered in design from time to time. While these may sound like daunting complications, they can be overcome with time, patience, and the tools in TNTmips.

USGS Library. The USGS spectral library has about 500 statistically represented spectral reflectance curves of mostly geological materials. This library is provided in a TNT reference file and is an integral part of this TNT process. This library was collected under ideal conditions in laboratory devices. It is provided as part of V5.90 as sample data for your practice. It can be applied as part of some of the simpler analyses of the sample Cuprite, Nevada AVIRIS imagery and others.

When using the USGS library, you can view the list of the short names of minerals, search by name, and display single or multiple spectral curves in the standard plotting window. TNTmips allows you to edit a spectrum's name and description, save it under a different name, delete it, or save it as a plain text file for easy exporting.

Other Libraries. MicroImages will provide methods for importing other public spectral libraries which can then be distributed as part of TNTmips. Simply identify where they are so that we can find them on the Internet or by other means. Methods to import private spectral libraries can also be provided by direct means or by way of the SML scripts.

Combining. TNTmips can combine spectral reflectance or spectral radiance curves from any source. Simply choose two input spectral curves from the list and choose the mathematical operation to perform. The new spectrum which results will be saved as a new spectral curve in the currently open library. The operations available to combine spectra include: average, resample, divide, subtract, add, maximum, minimum, and difference.
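For spectra sampled at the same wavelengths, most of these operations are simple value-by-value arithmetic. A minimal Python sketch (illustrative only; resampling, which requires interpolating between different wavelength grids, is omitted):

```python
def combine(spectrum_a, spectrum_b, operation):
    """Combine two spectra sampled at the same wavelengths, value by value.
    A subset of the operations named above (resample omitted)."""
    ops = {
        "average":    lambda a, b: (a + b) / 2.0,
        "add":        lambda a, b: a + b,
        "subtract":   lambda a, b: a - b,
        "divide":     lambda a, b: a / b if b else 0.0,
        "maximum":    max,
        "minimum":    min,
        "difference": lambda a, b: abs(a - b),
    }
    op = ops[operation]
    return [op(a, b) for a, b in zip(spectrum_a, spectrum_b)]

# Two toy three-band spectra (values chosen to be exact binary fractions):
a = [0.25, 0.50, 0.75]
b = [0.25, 0.50, 0.25]
print(combine(a, b, "average"))     # [0.25, 0.5, 0.5]
print(combine(a, b, "difference"))  # [0.0, 0.0, 0.5]
```

Averaging is the operation used to merge several representative curves of one material into a single statistically meaningful library entry, as discussed above.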

Editing. TNTmips provides a simple spreadsheet-like editor that displays selected spectral curves in four columns in tabular form: "Wavelength", "Value", "Bandwidth", and "Error". Use this tabular form to edit values individually or use "group" processing, which applies an editing operation to all values that fall within a selected wavelength range. Once you select the spectral range, all editing operations are automatically applied to every curve value within that range.

The editing operations provided are: set a value, add, subtract, multiply, interpolate, smooth, normalize, compute continuum, remove continuum, and compute derivative. When the edited results are satisfactory, the new spectral curve can be saved into the spectral library. A "Restore" function is also provided to restore the original spectral curve from the library.
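The "group" idea above amounts to masking the curve's values by wavelength and applying one operation to the masked span; "Restore" re-reads the saved curve. A hypothetical sketch with synthetic data (not the TNTmips editor itself):

```python
import numpy as np

# Hypothetical stored curve: parallel arrays, as in the four-column editor.
wavelength = np.linspace(0.4, 2.5, 211)
value = 0.3 + 0.1 * np.sin(wavelength * 4.0)

# "Group" processing: pick a wavelength range, then apply one operation
# to every value that falls inside it.
lo, hi = 1.0, 1.5
mask = (wavelength >= lo) & (wavelength <= hi)

edited = value.copy()
edited[mask] *= 1.1          # e.g. the "multiply" group operation

# A "Restore" is conceptually just re-reading the saved curve.
restored = value.copy()
```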

Analyzing. Another useful operation is spectral curve matching. With it you can search for spectra in the library that match the currently selected spectrum. The 20 best matches are listed in a scrolling list along with a parameter that indicates their spectral proximity. The interpretation of this spectral proximity parameter depends on the matching method selected. Two spectral matching methods are available: Spectral Angle Mapper (discussed below in connection with image analysis) and Band Mapping (called Spectral Feature Fitting by some). After matching, you can click on an entry in the "best match" list to display its spectral curve in the plot window.

 

IMPORTANT: Again, it should be emphasized that these curve management and analysis procedures can be applied to spectral curves and libraries you build up from your image(s).

 

Calibration Methods.

Many transformations are imposed upon the effective spectral radiance recorded for each pixel in a hyperspectral image. The amount and nature of these vary from sensor to sensor. A number of calibration techniques have been devised to mitigate their effects, and TNTmips provides several of these procedures.

The spectral irradiance from the sun to the scene and the resulting spectral radiance reflected to the hyperspectral imaging device pass through an atmosphere which selectively absorbs radiance as a function of wavelength. The spectral distribution and amount of this absorption is a function of the current level of several unmeasured constituents in that atmospheric path such as water vapor, carbon dioxide, and so on. TNTmips 5.9 provides several popular methods to correct for atmospheric absorption by removing its radiance attenuation effects. The methods provided are: Equal Area Normalization, Flat Field Calibration, and Maximum Value.

TNTmips applies this calibration on the fly, so you do not have to recompute and store different versions of the input hyperspectral image.

Additive Offset Calibration (AOC). AOC calibration corrects for instrument artifacts and atmospheric backscatter that will raise the recorded value of effective spectral radiance for all cells. For example, the atmosphere between the ground and the hyperspectral imaging device will scatter radiation into the sensor. Without correction, this would make an area of zero reflectance at a particular wavelength become non-zero in the effective radiance recorded for that pixel. Every pixel, regardless of ground spectral reflectance, will have something added to its effective radiance from this and other instrument sources.

Additive Offset Calibration can be done by the Minimum Value method or by "Dark Field Calibration", which uses a region in the image that you specify. The "Dark Field" method involves locating and outlining a dark area on the image (for example, water bodies) and extracting an average spectrum for this area. This is done automatically when you use the "Dark Field" calibration tool. You simply select and draw around the dark area. This average "Dark Field" spectrum will be subtracted from each pixel of the image during the subsequent processing.
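The dark-field arithmetic is a per-band subtraction of the region's average spectrum from every pixel. A sketch with synthetic data (the (bands, rows, cols) cube layout and mask are hypothetical illustrations, not the RVC storage format):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical cube: (bands, rows, cols) of effective radiance values.
cube = rng.uniform(100.0, 400.0, size=(50, 64, 64))

# Boolean mask of the dark region the user outlined (e.g. a water body).
dark = np.zeros((64, 64), dtype=bool)
dark[40:50, 10:20] = True

# Average "Dark Field" spectrum over the outlined cells...
dark_spectrum = cube[:, dark].mean(axis=1)          # shape (bands,)

# ...subtracted from every pixel of the image.
corrected = cube - dark_spectrum[:, None, None]
```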

Flat Field Correction (FFC). The FFC method uses the actual effective radiance measured from a polygonal area which you outline in the image. "Flat Field" indicates that you should select a land cover that has a relatively flat, uniform surface spectral reflectance at all wavelengths of interest. The average effective spectral radiance computed for this area is divided into the effective spectral radiance of each hyperspectral pixel.

Obviously there is no such perfectly flat spectral reflectance material in nature, but this is just another of the approximations which must be dealt with in hyperspectral image analysis. Do not choose vegetative materials or water for this purpose. Use man-made materials of complex composition such as asphalt, concrete, and canvas, or natural materials such as sandy soils if they occur in the scene, as they will exhibit the least spectral "character".
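The flat-field arithmetic itself is a per-band division by the outlined region's average spectrum. A minimal sketch with synthetic data (cube layout and mask are hypothetical illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
cube = rng.uniform(50.0, 300.0, size=(50, 64, 64))   # (bands, rows, cols)

# Mask of the user-outlined "flat field" polygon (e.g. a concrete pad).
flat = np.zeros((64, 64), dtype=bool)
flat[5:15, 5:15] = True

flat_spectrum = cube[:, flat].mean(axis=1)           # average radiance

# Divide each pixel's spectrum by the flat-field spectrum, band by band.
corrected = cube / flat_spectrum[:, None, None]
```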

Log Residuals (LR). This calibration technique uses a multiplicative observation model consisting of "topographic" and "illumination" factors. The algorithm applies a series of transformations to the measured input radiance to remove the influence of these factors. The result is a new set of spectra whose advantage is that it is the same as would be obtained by applying the identical transformations to the unknown reflectance spectra.

Analysis of Aircraft Spectrometer Data with Logarithmic Residuals. A.A. Green and M.D. Craig. Unidentified paper provided by a client; source and date unknown.

Equal Area Normalization (EAN). This calibration technique is also known as the Internal Average Relative Reflectance (IARR) method. It removes solar irradiance drop-off, atmospheric absorption, scattering effects, and instrument noise from the raw hyperspectral data. EAN (alias IARR) normalizes the data by scaling the sum of the band values for each pixel to a constant value, which shifts all spectral radiances to the same relative brightness. This method requires no knowledge of the surface materials because it uses an average effective spectral radiance calculated directly from the input data. The effective spectral radiance of each pixel is then divided by this average effective spectral radiance, which is also assumed to contain the solar irradiance. The procedure removes the majority of the atmospheric effects, except when the area has wide variations in elevation or atmospheric conditions are not uniform across the image.
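A sketch of the two EAN/IARR steps described above, with synthetic data (the constant target sum and cube layout are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
cube = rng.uniform(10.0, 200.0, size=(50, 32, 32))   # (bands, rows, cols)

# 1. Scale each pixel so the sum of its band values is a constant.
target = 1000.0
band_sums = cube.sum(axis=0)                          # (rows, cols)
normalized = cube * (target / band_sums)

# 2. Divide each pixel by the scene-average spectrum (the IARR step).
mean_spectrum = normalized.mean(axis=(1, 2))          # (bands,)
relative_reflectance = normalized / mean_spectrum[:, None, None]
```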

Calibration of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Data to Reflectance and Mineral Mapping in Hydrothermal Alteration Zones: An Example from the "Cuprite Mining District". Freek van der Meer. Geocarto International (3), 1994, pp. 23-37.

Hyperspectral Mapping.

TNTmips 5.9 has several algorithms for hyperspectral image classification and processing: Spectral Angle Mapper, Cross Correlation, Matched Filtering, and Linear Unmixing. Both the Matched Filtering and Linear Unmixing algorithms now use Singular Value Decomposition, which is effective for solving systems of linear equations built from hyperspectral data that is highly correlated from band to band.

Spectral Angle Mapper (SAM). This spectral classification method maps features by comparing a reference spectrum to that of each pixel in the hyperspectral image. The method assumes that the hyperspectral images have been reduced to "apparent reflectance", with all dark current and path radiance biases removed. SAM calculates the "angle" between image and reference spectra, treating them as vectors in N-dimensional space, where N is the number of spectral bands in the image. This measure of similarity is insensitive to instrument and other gain factors, because the angle between two vectors is invariant with respect to their lengths (in other words, their intensity = value in that wavelength interval). This allows correct comparison between laboratory spectra and image spectra that have an unknown gain factor related to topographic illumination and other effects. The procedure outputs two rasters: a classification raster and a "spectral angle" raster that contains the spectral angle for each classified pixel. A pixel is classified as a certain end-member material if the spectral angle between it and that end-member is less than a user-specified threshold.
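The angle computation itself is straightforward; a minimal sketch with synthetic spectra (the threshold value is an arbitrary illustration, not a TNTmips default):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra viewed as N-dimensional vectors."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

ref = np.array([0.1, 0.3, 0.5, 0.4])
# Gain invariance: scaling a spectrum does not change the angle.
assert spectral_angle(ref, 3.0 * ref) < 1e-6

pixel = np.array([0.12, 0.28, 0.52, 0.38])
angle = spectral_angle(pixel, ref)
classified = angle < 0.1        # user-specified threshold in radians
```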

Cross Correlation. This is a simple curve matching technique that computes the correlation coefficient between each end-member spectrum and each pixel spectrum. A pixel is classified as an end-member material if the correlation coefficient is above the specified threshold value. The process outputs two rasters: a classification raster and a "correlation" raster that contains the correlation coefficient for each pixel.
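A minimal sketch of the correlation test with synthetic spectra (the threshold is an arbitrary illustration):

```python
import numpy as np

end_member = np.array([0.1, 0.2, 0.4, 0.5, 0.3])
pixel = np.array([0.15, 0.22, 0.41, 0.52, 0.33])

# Correlation coefficient between the end-member and pixel spectra.
r = np.corrcoef(end_member, pixel)[0, 1]

threshold = 0.9
is_member = r > threshold       # classify if correlation exceeds threshold
```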

Linear Spectral Unmixing (LSU). This LSU procedure determines the relative abundances of materials depicted in hyperspectral images based on the spectral characteristics of the materials. The reflectance of each pixel is assumed to be a linear combination of the reflectances of each material you select to form an end-member set. The output from the algorithm is a set of spectral end-member abundance images.

TNTmips provides a constrained version of Linear Spectral Unmixing in which the sum of all abundances for each pixel is equal to 1. Abundances may assume negative values. Spectral unmixing results are quite sensitive to how well you can locate and select training sets to represent the end-member materials sought (for example, areas of complete crop cover and of bare soil). Samples located in other hyperspectral images can be used, but they must be from nearly the same time and date, the same sensor, and so on--everything required to ensure that the materials have the same spectral curves in both images.

Because the unity constraint supplies only one equation beyond the per-band equations, the number of end-members used in the linear unmixing can be at most the number of spectral bands plus one.

A1 * M1 + A2 * M2 + A3 * M3 + ... + An * Mn = Pw   (one equation for each wavelength w)

A1 + A2 + ... + An = 1.0   (this is the unity result constraint)

Where: Ai is the spectral abundance of material i

Mi is the spectral reflectance of material i at the given wavelength w

Pw is the spectral reflectance of the pixel at wavelength w
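One common way to solve this constrained system, consistent with the SVD approach mentioned above, is to append the unity constraint as an extra equation and solve by least squares. A sketch with two hypothetical end-members (illustrative, not the TNTmips implementation):

```python
import numpy as np

# Hypothetical end-member spectra, one column per material: (bands, n).
M = np.array([[0.10, 0.60],
              [0.30, 0.50],
              [0.50, 0.30],
              [0.70, 0.20]])

# Pixel spectrum: a 70/30 mix of the two materials.
true_a = np.array([0.7, 0.3])
p = M @ true_a

# Append the unity-constraint row (sum of abundances = 1) and solve by
# least squares; lstsq uses SVD internally, so near-singular systems
# degrade gracefully instead of failing outright.
A_mat = np.vstack([M, np.ones(2)])
b = np.append(p, 1.0)
abundances, *_ = np.linalg.lstsq(A_mat, b, rcond=None)
```

Note that nothing in this formulation prevents negative abundances, matching the behavior described above.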

Linear Unmixing of Ill-conditioned Spectra. N. Pendock and S.S. Stan. Proceedings of Twelfth International Conference and Workshops on Applied Geologic Remote Sensing. Volume II, pp. 393-398. Denver, Colorado, 17-19 November 1997.

Matched Filtering (MF). MF is an alternate technique for computing relative abundance of a material in a hyperspectral image based on its known spectral characteristics. Unlike Linear Spectral Unmixing, it does not require a complete set of end-members to perform a useful spectral unmixing. Matched Filtering maximizes the response of a desired spectral signature while suppressing the response of the undesired background signatures. This technique produces best results when applied to low probability materials within the scene (for example, a particular surface mineral). It is an effective method to use to "go hunting" for a particular material or condition in a hyperspectral image.

This procedure is based on a constrained energy minimization (CEM) technique. The basic difference between Linear Spectral Unmixing and Matched Filtering is that LSU tries to find a linear combination of known, pre-selected end-members at each pixel simultaneously, while MF computes a solution for one end-member at a time without any reference to other materials which might be present in the image. MF operates in two stages: first, a spectral correlation matrix is computed and the CEM operator is constructed as:

W = (R^-1 * d) / (d^T * R^-1 * d)

Where: R is the spectral correlation matrix (numbands x numbands) and R^-1 is its inverse

d is the spectral signature of interest (1 x numbands)

W is the CEM operator (1 x numbands)

The CEM operator is applied to the hyperspectral image in the second stage, and the relative abundance image is computed as:

A = W · P   (the dot product of the two 1 x numbands vectors)

Where: A is the abundance of the pixel

W is the CEM operator (1 x numbands)

P is the spectral profile of the pixel (1 x numbands)

The CEM operator produces a value of 1 for pixels whose spectral profile equals the signature of interest. This does not actually occur in real-world situations, where circumstances are complex and every pixel is a mix of materials (for example, vegetation, soils, and geological materials). Thus, the actual number is less than 1 and can be thought of as the relative abundance of the "sought" material in each cell. An abundance of .5 in some cells may be very significant when all the other cells produce significantly lower values for the material being tested. Thresholding this abundance value at .4 might produce a rough map of a particular soil surface underlying a vegetated surface. Or, if only a few cells have a higher abundance value, small areas of the particular geological material sought may be thresholded.

In many cases, the spectral correlation matrix is ill-conditioned because of the high correlation between spectral bands, and this causes numerical problems when finding its inverse. To overcome this problem, MicroImages has implemented and employed a new matrix library using the concepts of Singular Value Decomposition to find a pseudoinverse of the correlation matrix and insure numerical stability of the MF procedure.
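A sketch of the two MF stages described above, using an SVD-based pseudoinverse for the ill-conditioned correlation matrix as the text suggests (the data and cube layout are synthetic illustrations):

```python
import numpy as np

rng = np.random.default_rng(3)
nbands, npix = 30, 500

# Hypothetical image spectra (npix x nbands) and a target signature d.
d = np.abs(np.sin(np.linspace(0, 3, nbands))) + 0.1
X = rng.uniform(0.2, 0.4, size=(npix, nbands))
X[0] = d                      # plant the target at full strength in pixel 0

# Stage 1: spectral correlation matrix R (numbands x numbands) and the
# CEM operator W = R^-1 d / (d^T R^-1 d), with the SVD-based pseudoinverse
# standing in for the plain inverse when R is ill-conditioned.
R = (X.T @ X) / npix
Rinv_d = np.linalg.pinv(R) @ d
W = Rinv_d / (d @ Rinv_d)

# Stage 2: relative abundance A = W . P for every pixel at once.
abundance = X @ W
```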

 

Notice: There are several other processes in TNTmips which may also become unstable and produce totally meaningless results, or no results at all, if you enter incorrect or inaccurate data. For example, incorrect solutions are produced by the use of grossly inaccurate ground control points in the production of a DEM from stereo images or in a mosaic. Almost always these incorrect results are "trapped" by the TNT process, and you are advised of the need for more accurate data. The subsequent use of these new Singular Value Decomposition functions in such processes will assist in improving such situations, or at least managing them better (for example, they will report degraded results rather than none at all).

 

Use of a Modified Constrained Energy Minimization Technique to Map Ferruginous Sediments Along the Alamosa River, Colorado. W.H. Farrand and J.C. Harsanyi. Unidentified paper provided by a client; source and date unknown. Vol. II, pp. 385-392.

Comparison of Products.

The following is a comparison of the analysis procedures available within current versions of competing products, to the best of MicroImages' current knowledge. ERDAS is not included, as they have provided only a skeletal hyperspectral analysis procedure within Imagine 8.3. ERMapper has not distributed any commercial product in this area. Should you have further information to update or correct these tables, please supply it.

Spectral Curve Analysis.

                                   TNTmips 5.9       PCI 6.0   ENVI 3.0
Remove Continuum (RC)              Yes               ?         Yes
Spectral Feature Fitting (SFF)     Just for curves   No        Yes

 

Calibrations.

                                    TNTmips 5.9   PCI 6.0   ENVI 3.0
Equal Area Normalization (EAN)      Yes           No        No
Log Residuals (LR)                  Yes           No        No
Additive Offset Calibration (AOC)   Yes           Yes       Yes?
Flat Field Correction (FFC)         Yes           Yes       Yes?

 

Image Analysis Procedures.

                                      TNTmips 5.9    PCI 6.0   ENVI 3.0
Spectral Angle Mapper (SAM)           Yes            Yes       Yes
Cross Correlation (CC)                Yes            No        No
Linear Spectral Unmixing (LSU)        Yes            Yes       Yes
Matched Filtering (MF)                Yes            No        Yes
Vector Quantization Filtering (VQF)   Yes (V5.90+)   No        No

 

From these tables you can see that TNTmips immediately rates high in the technical features provided in this new prototype process. However, as pointed out earlier, there are still additional features identified as needed for TNTmips 6.0: a special compressed hypercube object, more import formats, and a Getting Started tutorial booklet. ENVI also has some excellent graphical hyperspectral inspection tools, such as a hypercube display and rotating n-space clustering. The purpose of the planned TNT compressed hypercube object is to provide the basis for developing these kinds of graphical inspection tools, significantly reduced object size, and improved analysis speed.

Hyperspectral analysis is a complicated concept incorporating a number of complex ideas. But 10 years ago, supervised and unsupervised image processing in general, and in DOS MIPS in particular, were complicated ideas and hard to use on slow computers with limited interactive functionality. Now they are widely available techniques used via many excellent products, some of which are free. It is reasonable to expect that, as the availability of hyperspectral imagery increases, simpler-to-use software will evolve to interpret it.

Technical Information Needed.

The objective of this new prototype process is to analyze spectra and/or extract information from hyperspectral images. MicroImages is not a research organization; it is primarily devoted to the implementation, simplification, and popularization of the ideas and results of others. MicroImages is seeking new ideas for additional hyperspectral analysis methods. We need your help to continue to improve the set of basic statistical, mathematical, and procedural hyperspectral analysis tools, and we welcome input of any kind on this topic: papers and ideas from professional and TNTlite clients, interested parties, researchers, and anyone else. These materials will be used as the basis for adding new analysis procedures to this process. If a useful procedure you know about is not incorporated into this hyperspectral process, it is probably because you did not provide information about it.

Military Situation.

It is clear that U.S. military organizations have been working with hyperspectral sensing and analysis for many years. The applications of this technology in such a setting are obvious. Some of the results of the military technological support are about to become available to the public through the U.S. Air Force and U.S. Navy's support of experimental public hyperspectral sensors on pending satellites (see above). This parallels their active support and plans to purchase high resolution panchromatic and multispectral images from other satellites for their public and publicity use, even though they have their own more advanced, but highly classified, devices. Certainly they already have much better classified aircraft and spacecraft hyperspectral devices.

Insight into their activity and what might be expected soon in the public sector can be gleaned from unclassified magazine press releases such as the following:

"Researchers at the Washington-based Naval Research Laboratory have successfully demonstrated autonomous, real-time in-flight hyperspectral detection of airborne targets and military ground targets. Selection was followed by cueing of a high resolution imager and target designation with pointing optics and a pulsed laser. The work, conducted as part of NRL's 'Dark Horse' program, demonstrated potential capabilities needed for planned autonomous Uninhabited Combat Air Vehicle (UCAV) operations, according to Thomas Giallorenzi, head of NRL's Optical Science Div. The daytime flight tests, conducted on board a P-3 aircraft, used two NRL-developed hyperspectral detection algorithms operating simultaneously in an 'and' mode to reduce false alarms. They produce a cueing signal upon target detection. Real-time imagery from the high-resolution reconnaissance framing camera was transmitted via data link. The system also was able to autonomously detect and cue on moving target aircraft flying 4,000 ft. below the P-3. Future tests will expand detection into the infrared bands for day/night operations."

From Industry Outlook in Aviation Week and Space Technology. 1 June 1998, p. 13.

From this clipping we learn that the military has advanced aircraft hyperspectral imaging devices operating in a close-range, high-resolution mode. Their imaging device is a framing (camera-like) device rather than a push broom scanner, which implies much higher data collection rates and more controllable image geometry. They downlink hyperspectral imagery in real time, and they process it in real time. Single-material, search-type signature matching analysis methods can be combined to increase the accuracy of detection or correct identification against a complex and even moving background. Hopefully some of these engineering advances, and the large investments made in analysis procedures, will become available soon for your use.

Modifications Since V5.90 CDs.

From all the above introductory material, you now realize that hyperspectral analysis has promise but is complicated in nature. If you are going to start using this prototype process this quarter, you must plan on frequent downloading of new versions. As you and others around the world begin to work with the process, we will be modifying and correcting it based upon your findings and ours. Several post V5.90 modifications have already been made and new versions posted on microimages.com.

Selecting Wavelength Ranges. It is now possible to select a sub-spectral interval by wavelength for the analysis processes. When you enter the wavelength range, these spectral bands are selected from the RVC file and used in all the processing you specify, including the comparisons and operations with the spectral libraries. Other systems require that you extract your original hyperspectral datasets into smaller datasets, with other extra steps, when a partial wavelength range is being used. The next step in this direction, now being implemented, will allow you to select a number of wavelength intervals for analysis, thus eliminating the atmospheric absorption bands if desired (or working with them only). It will also be possible to simply select the atmospheric bands from a list by name and eliminate or retain them.
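Selecting several wavelength intervals amounts to building a boolean mask over the band centers; a hypothetical sketch (the band grid and interval edges are illustrative, not actual sensor values):

```python
import numpy as np

# Hypothetical AVIRIS-style band centers in micrometers.
wavelengths = np.linspace(0.4, 2.5, 224)

# Keep one or more wavelength intervals, skipping the water-vapor
# absorption regions near 1.4 and 1.9 micrometers.
keep_ranges = [(0.4, 1.34), (1.43, 1.79), (1.96, 2.5)]

mask = np.zeros_like(wavelengths, dtype=bool)
for lo, hi in keep_ranges:
    mask |= (wavelengths >= lo) & (wavelengths <= hi)

selected_band_indices = np.nonzero(mask)[0]
```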

VQ Filtering. A promising new analysis method from MicroImages called Vector Quantization Filtering has been added.

AVIRIS in TNTlite. The limits on TNTlite raster objects have been raised from 640 by 480 (307,200 cells) to 614 by 512 (314,368 cells) to accommodate full AVIRIS images.

AVIRIS Browse Images. AVIRIS 97 images each come with a file containing a browse image of four bands of all the images in the flight line. This browse image can now be imported into a separate RVC image for quick browsing.

* Object Editor.

Directly Edit E00, Coverage, and Shapefiles.

External objects can now be loaded into the object editor for maintenance and updating. ESRI's E00, Coverage, and Shapefile formats are supported. Use the File/Open External menu selection to open external objects. Saving the external object will automatically do a "Save As" operation into the original format. The restrictions of the format involved are maintained by the object editor. For example, if the external object does not support Z values and point elements, then the editor will disable Z value assignment and adding point elements. These restrictions can be removed if a conversion is done to a vector object in the Project File through "Layer/Properties…". This conversion is automatic if editing in TNTlite, as saving to the original format (in other words, exporting) is not allowed.

Unlike the TNT products, the Coverage and E00 formats limit the number of vertices a single line element can contain. To honor these restrictions, the object editor imposes the same limit on each line when you are editing files in these ESRI formats. You can also create a new vector object in a Project File for subsequent export to, or "Save As", an E00 or Coverage file. In that case you must make sure that the number of vertices in the lines you create in TNTmips stays within the limit. To set this limit, use the New Object Values dialog on the Element ID Values pop down panel or the Layer/Properties... dialog.
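One simple way to honor such a vertex cap, splitting an over-long line into pieces that share junction vertices, can be sketched as follows (illustrative code, not the object editor's algorithm):

```python
def split_to_vertex_limit(vertices, limit):
    """Split a polyline into pieces of at most `limit` vertices.

    Consecutive pieces share a vertex, so the combined geometry is
    unchanged -- the same kind of restriction imposed for E00/Coverage.
    """
    if limit < 2:
        raise ValueError("a line needs at least two vertices")
    pieces = []
    start = 0
    while start < len(vertices) - 1:
        end = min(start + limit, len(vertices))
        pieces.append(vertices[start:end])
        start = end - 1          # repeat the shared junction vertex
    return pieces

line = [(i, i * i) for i in range(10)]
pieces = split_to_vertex_limit(line, 4)
```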

Effects of New Topology.

The object editor has been modified to generate and maintain the selected vector object topology type discussed earlier. In the New Object Values dialog, there is an option menu to select the topology type to assign to the new vector object. There is also a method available to change the topology type within the editor. Use Layer/Properties... on the main window toolbar to change the topology type. If such a change would cause a loss of information, such as converting a polygonal vector object to a planar vector object, a warning will pop up to inform you that this action will cause a loss of polygon information.

3D vector objects can now be created, loaded, and maintained by the object editor. Topology projected onto the XY plane is maintained for 3D vector objects, depending on the topology type previously discussed. For a polygonal vector object, polygons will be maintained in 2D space even though the coordinates are in 3D. 3D vector objects created before V5.90 automatically become no-topology vector objects.

If the 3D vector object is a network vector object, nodes can coexist at the same XY location if they have different elevations. For either polygonal or planar vector topology, lines that overlap in the XY plane but have different elevations are considered intersecting lines, and a node will be generated at that intersection point. The ability to change the dimensionality of the vector object from 2D to 3D, or the reverse, is available under Layer/Properties... on the main editor menu. If the change is from 3D to 2D, a dialog will pop up to verify your request, since this conversion will result in a loss of information.

The line editor tool can now manipulate the entire line for planar and network vector objects. This includes moving the start and end vertices and manipulating the line such that it crosses other lines. You cannot perform this operation on a polygonal vector object, since manipulating a line in this fashion would cause unwanted and unpredictable results in polygon database attachments. If you want full line editing capability, and the polygons have no attached attributes or you do not care about them, changing the vector object's topology type to planar under Layer/Properties... will eliminate the restriction.

 

Default Records.

Default records can now be assigned when adding elements using the tabular view of one or more database tables. When using the Add Element tool with Vector, CAD, or TIN objects, one or more check boxes can be selected to indicate that the record is to be attached by default when an element is added. To enable this feature, press the "Enable Tabular View Default Record" toggle button under Setup/Preferences/Other. The records can only be selected from a table with an attachment type of "No Restrictions" or "One Record Per Element". If a default record is already being assigned through the Attributes icons under Default Record, this tabular view default record feature will be disabled until you turn on the Enable Tabular View Default Record toggle button.

Vector Filtering.

Filtering options for lines, polygons, and nodes have been added into the object editor for direct use, as well as by way of a separate process. These filters are accessible on the Vector Tools dialog for the current editable vector object. A color plate is attached entitled Vector Filters to illustrate the interface and general operation of this revised procedure. The filters are Remove Dangling Lines, Remove Sliver Polygons, Resolve Undershoots, Line Simplification, Remove Excess Nodes, Dissolve Polygons, Line Densification, and Remove Bubble Polygons. Each filter, except Remove Excess Nodes, brings up a dialog to set its specific parameters. After the filtering is complete, a Message dialog will come up showing the summary results of the filter's application. This individual filter summary report can be disabled under Setup/Preferences/Vector.

Editing Z Values.

While adding elements to 3D vector objects, the Z values from a surface reference layer can be used to set the Z coordinates in the current Add Element tool. Select the Z Coordinates from Surface Layer toggle button for automatic Z value transfer to the element in the current add element tool. For the Add Line, Add Polygon, and Add Point tools, the Z value will show up in the Manual Entry section of the tool and will change when the tool graphic is changed. This feature is useful for editing and maintaining 3D vector or TIN data that was created from a reference DEM in a 2D to 3D conversion process. This toggle will not show up if the vector object being edited is not a true 3D vector object. The toggle is always present for TIN objects.

Automatic Label Generation.

Automatic label generation for vector objects can now optionally perform label optimization for generated labels using the same method employed by the CartoScript function introduced in V5.80. These positions can be saved as your permanent label positions; do this if repeating the automatic label generation in the spatial display process is too slow, or when you have finished your editing. Due to its relatively slow speed, the label optimization algorithm is not performed in the preview mode of Auto Label Generation. The optimization parameters are under the Optimize tab in the Auto Label Generation dialog.

Breaklines in TINs.

Breaklines can now be intersected from a 2D or 3D vector object into a TIN object. Line elements can also be drawn or traced as breaklines from a reference layer into TIN objects in the object editor. All the line segments they contain are inserted as permanent, fixed edges of new triangles in the TIN.

If the breaklines come from a 2D vector object, or are edited into the TIN without Z values, the Z value of each vertex is interpolated from its XY position on the plane of the TIN triangle that contains it. The use of breaklines to control surface modeling is discussed below in that section.
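This interpolation can be done with barycentric coordinates computed in the XY plane; a minimal sketch (illustrative code, not the TNTmips implementation):

```python
def z_on_triangle(p, a, b, c):
    """Interpolate Z at XY point p from the plane of triangle (a, b, c).

    a, b, c are (x, y, z) TIN vertices; p is (x, y). Barycentric weights
    are computed in the XY plane and applied to the vertex elevations.
    """
    (x, y) = p
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3

# The centroid of a triangle gets the average of the vertex elevations.
z = z_on_triangle((1.0, 1.0), (0, 0, 30.0), (3, 0, 60.0), (0, 3, 90.0))
```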

Miscellaneous.

The Set Contour Z Values tool can now display assigned contours in two colors or multiple colors based on a "major interval" setting. For the Multiple setting, the major interval setting determines how many different colors can be used. For the Major Interval setting, the major interval lines are one color, and all of the other intermediate assigned lines can be a different color.

There is an option under "Setup/Preferences/View" to turn off spatial DataTips for all editable layers. The default is set to turn off DataTips for editable layers.

Modifications since V5.90 CDs.

Profile Editor. A new Profile Edit window has been added. Select a line in a 3D vector object (for example, a drainage or geophysical survey line), and it can be shown and edited in profile. In this edit window, the Z values of vertices in the line can be changed and the line can be splined; other features and line filters will be added as needed.

Drawing Tools. Constraints for horizontal and right angle drawing movements in Stretch mode have been added to the line editor. Hold down the "shift" key to continue the line drawing at a right angle and the "ctrl" key for only horizontal or vertical drawing.

A toggle button has been added for turning on/off the line editor start (square) and end (circle) markers (default is on).

Vector Filtering.

The vector filter process has been rewritten to reorganize it and make it more usable for applying multiple filters to multiple objects. It now allows the selection of multiple filters to apply to each object. These filters include Remove Dangling Lines, Remove Sliver Polygons, Resolve Undershoots, Line Simplification, Remove Excess Nodes, Dissolve Polygons, Line Densification, and Remove Bubble Polygons. More than one filter can be selected, and a filter can be applied more than once to the list of vector objects. The order of the filters in the list is the order in which they are applied. There are methods available to add, remove, reorder, and modify the parameters of each filter in the list. The Test view window will show the results of each filter on the active vector object in the test color assigned to the filter. A report is generated after the filters are applied summarizing the results of each filter on each vector object. A color plate is attached entitled Vector Filters to illustrate the interface and general operation of this process.

Databases Editor.

The Edit menu is now enabled for both single-record and tabular views, and its Cut, Copy, and Paste options now work in the database editor.

* Network Analysis (new prototype process).

Introduction.

A new network tracing process has been added to TNTmips. This new process provides the fundamental route tracing procedures found in other general purpose GIS products such as ArcNet. A network vector object can be used to compute the optimal path or route between two points with a number of waypoints and route control parameters. A second procedure allocates all the area served by a point by tracing out all routes to it with the described parameters (for example, the area served by a retail location within a 15 minute legal drive). A color plate is attached entitled Network Analysis to illustrate the results of each of these procedures.

While routing is explained here, and first thought of, as an urban application, we hope some of you will find it useful in other areas, such as tracing particles backward up a stream network where the constraints on the lines are stream gradients, Reynolds numbers, geologic stream bed materials, and so on. In other words, to determine where a sample of material most likely originated. This can become a complex geospatial analysis when raster and vector overlays are also used.

As introduced earlier, advancing the network routing features in the TNT products required the addition of a new type of network oriented vector topology and the ability to attach and maintain attributes for nodes. Until now, this was one area where your requests for expanded network analysis in the TNT products could not be satisfied. Now that this background development and the basic process and features are in place, new and more advanced network analysis features will appear in future versions of TNTmips. Alas, for a while these will not include dynamic segmentation, which would permit the indirect addressing of positions on lines such as house addresses, mile markers, and so on; dynamic segmentation will also require changes in a number of fundamental procedures in several TNT processes.

Network Attributes.

Network analysis requires that you acquire or carefully create a suitable network vector object. Often you will start with a general vector or imported CAD object and manipulate the node and line attributes to define underpasses, overpasses, direction of travel, allowed turns, average time for turns, speed limits, time of day speeds, no passing zones, stop lights, average time of stop at a light, and so on.

This process allows you to create tables with standard network control attributes. At present the network line table has:

• separate enable/disable states in both directions of the line

• impedance values for both directions of the line

• demand for the whole line (this would be something like the number of bus students living along a line)

Network node attributes can represent barriers or a turn impedance matrix. You can add additional parameters to nodes or lines using additional relational tables and the other tools in TNTmips. Please identify any additional route defining standard attributes needed.

Applications.

Routing. You define start, end, and any desired intermediate nodes as stops on a route, and the process computes an optimized path through them. You can control route line segment selection and weighting with a query. The route is displayed as a selected vector in the View window. The process also generates a text report about the route which you can optionally save to a text file.
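
At its core, such routing is a shortest-path search over the network using the per-direction impedances from the line table. The sketch below is illustrative Python, not the TNT implementation: a Dijkstra search is chained through the stops in the order given, and a disabled direction is represented simply by omitting that edge.

```python
import heapq

def shortest_route(edges, start, end):
    """edges: {node: [(neighbor, impedance), ...]} with one entry per
    enabled travel direction. Returns (cost, path)."""
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == end:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, imp in edges.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (cost + imp, nbr, path + [nbr]))
    return float("inf"), []

def route_with_stops(edges, stops):
    """Chain shortest paths through the desired stops, in order."""
    total, full = 0.0, [stops[0]]
    for a, b in zip(stops, stops[1:]):
        cost, path = shortest_route(edges, a, b)
        total += cost
        full += path[1:]
    return total, full
```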

Allocation. There are two types of allocation procedures available. Allocate In is used where timing and arrival at the center is sought (for example, getting to school). Allocate Out is used where distribution from the center to surrounding areas is sought (for example, for setting delivery territories). You select several nodes as centers, and the process will compute the allocation of lines to centers. These allocations are displayed in color in the View window. The process also generates a text report about this allocation which you can optionally save to a text file. You can also save the allocation as a new database table which links the allocation center to the lines it services.
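
Allocation can be sketched as a multi-source shortest-path search: every reachable node is labeled with its nearest center, optionally within a travel budget such as a 15 minute drive. Again this is illustrative Python, not the TNT implementation; reversing the edge directions gives the Allocate In variant.

```python
import heapq

def allocate(edges, centers, budget=float("inf")):
    """edges: {node: [(neighbor, impedance), ...]}.
    Returns {node: (center, cost)} for nodes within budget of a center."""
    heap = [(0.0, c, c) for c in centers]
    heapq.heapify(heap)
    label = {}
    while heap:
        cost, node, center = heapq.heappop(heap)
        if node in label or cost > budget:
            continue
        label[node] = (center, cost)      # nearest center claims the node
        for nbr, imp in edges.get(node, []):
            if nbr not in label:
                heapq.heappush(heap, (cost + imp, nbr, center))
    return label
```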

Transfer Attributes.

When transferring vector attributes, you can now specify the table join type if the destination database already has existing tables. The available table join types are If Same Table Name and Structure, If Same Table Structure, Do Not Join, and Remove Destination Tables.

You can now transfer attributes from raster objects to vector points.

Vector Validate.

The vector validate process under "Process/Vector/Validate..." now allows multiple vector objects to be selected and validated at one time.

The validate process now automatically reduces the dimensional category of the vector object to its lowest common denominator. If a vector object is listed as a 3D XYZ vector object with an X, Y, and Z coordinate for each vertex and all of the Z values are constant for each line element (as in a contour), the vector object would be reduced to a 3D XY object with Z value stored in the line element header. This translates into a smaller vector object and allows the object to be used in a process that can only handle 2D coordinates, like the object editor for vectors before V5.90. If all of the Z values are zero, the vector object is reduced to a 2D XY vector object. In any event, no information is lost in any coordinate system reduction.
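
The reduction logic can be sketched as follows. This is illustrative Python; the informal type labels stand in for the TNT vector object categories described above.

```python
def reduce_dimension(lines):
    """lines: list of [(x, y, z), ...] line elements.
    Returns a (kind, data) pair using the lowest category that loses nothing."""
    zs = [z for line in lines for (_, _, z) in line]
    if all(z == 0 for z in zs):
        # all Z values are zero: plain 2D object
        return "2D", [[(x, y) for (x, y, _) in line] for line in lines]
    if all(len({z for (_, _, z) in line}) == 1 for line in lines):
        # constant Z per line (as in contours): store Z once per line element
        return "3D-per-line", [(line[0][2], [(x, y) for (x, y, _) in line])
                               for line in lines]
    return "3D-per-vertex", lines
```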

GeoFormulas.

* GeoFormulas created, tested, and saved in the new process can immediately be used in SML scripts.

* Surface Modeling.

Minimum Curvature.

The minimum curvature surface fitting method now supports entering a surface tension parameter (ranging from 0.0 to 1.0).

Bidirectional Surface Fitting.

MicroImages has added components to V5.90 needed to process raw geophysical data lines. These include specialized surface fitting methods called Bidirectional, line-leveling methods (now in V5.90+), a profile editor (now in V5.90+), and a method for reduction to the pole for magnetometer surveys. Fortunately, most of these tools are generic and will be of use in other disciplines. A color plate is attached entitled Bidirectional Surface Fitting of Geophysical Transect Data to illustrate this application.

Air and ground geophysical surveys (aeromagnetic, gamma ray emission, …) are field strength measurements at the collecting instrument. As a result, you cannot directly collect images. Data profiles are collected in nearly parallel lines with very frequent data points along the flight line. These lines are flown in a selected direction so as to cross the subsurface structure of interest rather than parallel to it. To conserve aircraft time, the separation of the lines is large relative to the interval between observations along the lines. This results in point vector objects which look like a lot of closely spaced dots in parallel lines.

Those of you in other professions will find that you also may have or may collect survey data in such a fashion. For example, bathymetry is often collected in such parallel ship transit lines. Ecological transect surveys can be parallel GPS controlled lines with frequent observations of multiple parameters at each stop. Laser profilometers have been used to measure parallel elevation transects. A soil salinity and conductivity trailer can be pulled through an orchard or field for such a survey. Perhaps you can think of other similar data collection operations where you cannot collect an image of a parameter but could make one using this parallel line approach.

Parallel lines of closely spaced points can add bias and artifacts into some surface fitting methods. Bidirectional surface fitting has been added and is used for this kind of data distribution. It works across the flight lines to create a surface with as much detail and character in this direction as possible. It is possible that, after such input data is flown and converted to a raster, the most important structures in the data are not generally perpendicular to the transect lines. Bidirectional surface fitting can also be run at any angle you select to enhance perpendicular structures. It also works very well for many conventional point data distributions where no particular directional bias is present, producing a surface with considerable detail if adequate input data points are provided.

Breaklines.

Breaklines in TINs represent 3D lines or polygons which can be considered as boundary conditions which must be met. These features will be preserved in the appropriate surface fitting operations. Breaklines can be introduced into your surface modeling operations whenever they can be introduced into your TIN at the appropriate geographic position. Breaklines can represent a 3D drainage network, the margin of a lake, the edge of an island, the edge of a proposed road cut or excavation, and so on. Any line or polygon which you wish to represent as a break (a discontinuity) or abrupt change in the surface can be represented as a breakline.

Vector object(s) containing your breaklines are selected using a new Breaklines tabbed panel. Via this panel you can choose to apply either or both of two application modes. 1) Apply Breaklines incorporates the vector lines within the TIN structure as fixed hard edges or breaklines. 2) Clip Areas is appropriate for a breakline vector containing closed polygons representing desired TIN boundaries. These polygons will clip the TIN object using Clip Inside or Clip Outside options. Clip Inside creates "holes" within the TIN and, in a topographic example, would be appropriate for breakline polygons representing lake shorelines. Clip Outside eliminates TIN edges outside the breakline polygons and would be appropriate for polygons representing island shorelines.

For example, breaklines can be 3D vector objects containing the drainage and ridgeline networks and the polygons representing lake shorelines. If you are surface fitting to create a DEM from digitizing a contour map, create a vector object containing digital lines for the drainages, ridgelines, and lakes. The DEM which results from using these breaklines in your surface fitting operation will show these sharp boundaries and contain their values. Another example would be to introduce the polygonal margin of islands (for example, Japan) as polygonal breaklines. The resulting surface will stay within the islands' margins, and the "ringing" of the surface at the boundary will be substantially reduced.

Future Work.

The first clients to beta test the bidirectional surface fitting method with geophysical data came back with a request which illustrates how other seemingly unrelated developments must precede advances of TNTmips into new areas such as geophysical data analysis and display. In geophysical surveys, several parameters may be collected for each observation point. For example, if the survey is for gamma ray emissions, then typically four emission measurements are made: Thorium, Potassium, Uranium, and total. In many other transect surveys (for example, ecological, bathymetry, ...), it would also be convenient and economical to sample multiple parameters at each data point.

It is expensive to fly or otherwise create transects, so as many different kinds of observations are recorded as possible at each XYZ sample point. Obviously, when you have this kind of sample data, you would like to deal with them as XYZ position points with attribute records attached. This would allow a surface to be fit to any of the observed parameters or to a computed field designed to model combinations of the observations. For example, Thorium, Potassium, and Uranium emissions are often combined in various ways before a surface is produced. Queries can be used to thin or combine and manipulate the observations as input to surface fitting. For example, when plant species A is present and plant species B is not, then fit the surface to the observed level of nutrient C.

A good transect survey design will have much more widely spaced crosslines for calibration and validation. In a geophysical survey, these lines are used for "line leveling" to compensate for drift in the survey observations. In a bathymetric survey, the crossing lines are used to check the tidal corrections at each crossing. But these crossing lines pose a dilemma. Lines can cross at intersections where two separate points carry attributes for the same observations but may or may not match in Z. Furthermore, if the "connectivity" of these lines were important, it would be lost if they were imported as points only instead of lines. Importing them as lines also creates cumbersome polygons where none are needed.

V5.90 introduces new vector topological data structures to accommodate new kinds of geospatial applications. It is logical to import this kind of geodata into a network vector object. This object allows crossing lines which do not intersect (for bridges and tunnels in routing applications). Also introduced first in V5.90, nodes can have attributes attached and can be used to represent the observations. Nodes (observations) can also coexist in the same XY position in a network but do not have to occur at intersections. The line segments between them will retain the connectivity of the observations. Polygons are not formed. By such synergism, features developed principally for network routing applications immediately become useful in these new applications.

TNTmips can provide a means for analyzing geophysical, bathymetric, ecological, and other line or transect surveys. To do these tasks right requires a lot of additional pieces that are already available in a more complete geospatial system. These tasks are difficult in other software developed exclusively for narrow purposes. For example, magnetic survey profiles can easily be plotted on images in XY or in XYZ over images to determine if they cross a farm, ranch, powerline pylons, and so on, creating a spike due to the metals present. From this view, these profiles can be directly selected and edited in the new profile editor. Attaching survey data to XYZ positional points as attributes allows all the geospatial analysis and visualization tools to be applied. This synergism is what the planning and evolution of geospatial analysis is all about.

Raster Operations.

Elevation/Slope.

Surface slope can now be computed in percent as well as in degrees. Also, two methods of slope calculation are supported using four or eight neighboring cells for slope calculations.
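
The two neighborhood options can be sketched as a four-neighbor central difference versus an eight-neighbor, Horn-style weighted gradient. This is illustrative Python assuming a square cell size, not the TNT implementation.

```python
import math

def slope(grid, r, c, cell=1.0, neighbors=4, percent=False):
    """Slope at interior cell (r, c) of a row-major elevation grid."""
    if neighbors == 4:
        dzdx = (grid[r][c+1] - grid[r][c-1]) / (2 * cell)
        dzdy = (grid[r+1][c] - grid[r-1][c]) / (2 * cell)
    else:  # 8-neighbor gradient with Horn weights (1, 2, 1)
        dzdx = ((grid[r-1][c+1] + 2*grid[r][c+1] + grid[r+1][c+1])
              - (grid[r-1][c-1] + 2*grid[r][c-1] + grid[r+1][c-1])) / (8 * cell)
        dzdy = ((grid[r+1][c-1] + 2*grid[r+1][c] + grid[r+1][c+1])
              - (grid[r-1][c-1] + 2*grid[r-1][c] + grid[r-1][c+1])) / (8 * cell)
    rise = math.hypot(dzdx, dzdy)   # rise per unit run
    return 100 * rise if percent else math.degrees(math.atan(rise))
```

A 45 degree slope corresponds to 100 percent, since rise equals run.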

Elevation/Cut Fill.

The elevation difference between the two rasters can be optionally computed and saved as a new raster object.

DataTips.

Raster cell values or the attributes in records attached to the raster cells can be viewed as the DataTips for each cell. For example, the RGB color coordinates can be viewed in a single DataTip when selected as the DataTip for the RGB layers viewed.

Correlation Histograms.

Correlation histograms can be displayed for raster objects of any data type. The axes of the histogram are now labeled, and the window can be resized. The histogram can be saved to a database table or text file.

Morphology.

Clumping. You can define and locate contiguous groups of cells by specifying the Min or Max number of cells of the selected value. The size of a clump can also be specified by area. Holes in the clumps caused by enclosed cells of other values can also be ignored when searching for clumps.

Sieving. This process will remove raster features less than a size you specify. Be sure to apply Clumping before using this feature.
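
Together, Clumping and Sieving amount to connected-component labeling followed by a size filter. The 4-connected sketch below is illustrative Python only; the Max size, area-based size, and hole-handling options of the TNT process are not reproduced.

```python
from collections import deque

def clumps(grid, value):
    """Return lists of (row, col) cells for each 4-connected clump of value."""
    rows, cols = len(grid), len(grid[0])
    seen, out = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == value and (r, c) not in seen:
                clump, q = [], deque([(r, c)])
                seen.add((r, c))
                while q:
                    y, x = q.popleft()
                    clump.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == value and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            q.append((ny, nx))
                out.append(clump)
    return out

def sieve(grid, value, min_cells, background=0):
    """Remove (set to background) clumps smaller than min_cells."""
    for clump in clumps(grid, value):
        if len(clump) < min_cells:
            for y, x in clump:
                grid[y][x] = background
    return grid
```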

Progressive Transformation.

The user interface has been standardized: layer manager, pyramiding, and background color setting.

Merging CAD.

The ability to merge CAD objects was removed from the mosaic process in V5.80. Unfortunately, the fact that it was not available elsewhere was not brought to our attention until after the CDs were duplicated for V5.90. However, a replacement process has been implemented and can be downloaded from microimages.com and operated from Process/CAD/Merge.

Mosaic.

TNTmips now provides two different feathering methods for image overlap areas: linear and non-linear, both of which weight how the image cell values are changed in the overlap area as a function of their distance from the border. Feathering works with a whole image as well as with user-defined areas of it. You can select how far to feather each overlapping image in pixels measured from the edge of the raster into the overlap area. Many competing mosaic products restrict feathering distance to be relatively small. There are no restrictions on the maximum value of the feathering distance, so images can be mixed together very smoothly. TNTmips' feathering is capable of handling an unlimited number of images in the overlap areas without speed or quality penalty.
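
The weighting idea can be sketched for a single overlap cell: each image contributes in proportion to that cell's distance from the image's own edge, with either a linear or a smooth non-linear ramp. This is an illustrative Python formula; the actual TNT weighting functions are not published here.

```python
def feather(v1, d1, v2, d2, linear=True):
    """Blend cell values v1 and v2; d1, d2 are each cell's distance
    from its own image's edge into the overlap area."""
    if linear:
        w1 = d1 / (d1 + d2)
    else:
        t = d1 / (d1 + d2)
        w1 = t * t * (3 - 2 * t)   # smoothstep ramp for a softer transition
    return w1 * v1 + (1 - w1) * v2
```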

You can now mosaic multi-band images of more than three bands at one time. Crosshairs can be moved in sub-pixel increments.

In bundle adjustment and other "fitting" processes, a large family of linear equations is solved. In some instances, the number and positioning of your control points can cause these linear equations to be over- or under-determined or otherwise ill-defined. This is particularly difficult to handle when traditional methods of matrix inversion fail. To handle this situation, the mathematical function handling these computations in mosaic (and subsequently other processes such as surface modeling) is being changed to use Singular Value Decomposition (SVD) methods.
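
The advantage of SVD can be seen in a small sketch: the pseudo-inverse built from the SVD yields the minimum-norm least-squares solution of A x = b even when A is over- or under-determined or rank-deficient, exactly the cases where a plain matrix inverse fails. Illustrative Python using NumPy, not the TNT code.

```python
import numpy as np

def svd_solve(A, b, rcond=1e-12):
    """Minimum-norm least-squares solution of A x = b via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Zero the reciprocals of negligible singular values instead of
    # dividing by them; this is what makes rank-deficient systems safe.
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))
```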

TNTsdk.

Those of you working with the TNTsdk on your own new processes or to extend SML with new functions may frequently need new function libraries. All the TNT libraries are now posted on microimages.com each Thursday for your access and upgrading.

MicroImages has paid the license fee for the transfer to you of the MOTIF libraries for the Windows and Macintosh platforms. These libraries are not posted on microimages.com, as doing so would violate our license agreements with the Open Software Foundation. In any case, the MOTIF libraries are not periodically changed by MicroImages.

Stereoscopic Modeling/Restitution.

* The restitution process is now part of the integrated Digital Photogrammetric Modeling process; the previously separate process has been merged into the main stereoscopic modeling program. The new restitution process supports rectification of Vector and CAD objects as well as raster objects. A color plate is attached entitled SPOT Restitution from DEM to illustrate these options.

A SPOT image can be converted into an orthoimage using the restitution process. The DEM can cover all or a portion of the SPOT scene. The orientation and other satellite ephemeris parameters are read from the SPOT header. If no DEM is available, a pseudo restitution can be made by entering XYZ control points.

 

* SML.

Introduction.

GPS Input. A single GPS function is available in V5.90 to read coordinates and associated parameters for a GPS unit which communicates with NMEA protocol. The single function in V5.90 reads the designated serial port, handles the protocol, and organizes and serves up the coordinate position to the next step in your script. Modifications since V5.90 have added a suite of functions to provide more complex control over multiple GPS devices.
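
As a sketch of what the NMEA handling involves, the following Python (not the SML function) extracts a position from a $GPGGA sentence; NMEA packs each coordinate as degrees and minutes (ddmm.mmmm) with a hemisphere letter. Serial-port I/O and checksum validation are omitted.

```python
def parse_gga(sentence):
    """Return (lat, lon) in decimal degrees from a $GPGGA sentence."""
    f = sentence.split(",")
    if not f[0].endswith("GGA"):
        return None
    def to_deg(val, hemi):
        # NMEA packs degrees and minutes together: ddmm.mmmm
        dot = val.index(".")
        deg = float(val[:dot - 2])
        minutes = float(val[dot - 2:])
        d = deg + minutes / 60.0
        return -d if hemi in ("S", "W") else d
    return to_deg(f[2], f[3]), to_deg(f[4], f[5])
```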

SML Versions. SML is being maintained as backward compatible. To the best of our knowledge, old SML scripts will run with each successive version of the TNT products. If you encounter situations where this does not appear to be the case, please contact MicroImages so that adjustments can be made. Equivalent, more advanced functions may also be added from time to time in close parallel to existing functions. However, the older simpler versions of functions will be retained to permit operation of existing, older scripts which still use them.

New functions are continually being added and existing functions modified to the extent that they have new optional parameters and flags added. To assist you and MicroImages in keeping track of the version, each function now displays two dates on the "Insert Function" dialog box. The earlier of the two dates is the date on which the function was created. The later date is the last date upon which the function was modified to add a new optional parameter or flag which you might like to take advantage of. This second or last date modified will not be changed if the function is only changed internally such as improving its speed of execution with revised buffering, better code, and other transparent changes. By watching this last date modified, you can spot functions you use which have had new optional features added.

Saving Scripts. V5.80 supported saving and using scripts as SML objects. This procedure will be clear when opening or saving a script. Project Files are convenient places to store your scripts which are closely associated with the geodata sets they are intended to work with. Objects containing SML scripts are very small relative to geodata, so do not hesitate to save the same script into each Project File which contains the geodata which it was designed to operate on. In this fashion, the correct SML script will move around with the geodata and always be available and correct. A color plate is attached entitled New Features for SML Developers to demonstrate this new operation.

Creating Icons. The icon creation and editing tool previously used only by MicroImages is now available for your use via SML. It was used to create every icon used in the TNT products. Now you can use it to create icons from 16 by 16 pixels to 256 by 256 pixels. All the icons in the TNT products are provided with it for your reuse with or without modification. A color plate is attached entitled SML Features (continued) to demonstrate this editor.

APPLIDATs. The concept of combining SML script objects with other geodata objects in a Project File has created a totally new kind of geospatial data distribution and analysis concept. A V5.90 Project File can contain SML objects for display and analysis which can be written to automatically act upon the geodata objects (or other data) it contains. Such a Project File is called an APPLIDAT. APPLIDAT stands for APPLIcation plus DATa. This concept and its possible uses by you are described in detail below in a new, separate section and on several color plates. It is so unique and different that it is best understood by trying a sample. As a result, you will find that a sample APPLIDAT has been installed on your desktop along with TNTmips, TNTedit, or TNTview and their equivalents in TNTlite. Simply click the APPLIDAT (SML) icon and follow the instructions it contains.

The sample APPLIDAT included with V5.90 contains several SML scripts. None of these scripts are encrypted, and their contents can be easily reviewed in SML along with the extensive comments they contain. You are completely free to begin to modify them for any purpose you see fit. A simple first exercise would be to convert them from an APPLIDAT to a more general TurnKey Product. This would require that you alter the Biomass or first script so that it presents an object selection dialog box to select an image and the associated elevation raster, rather than reading them from the same Project File. There are other simple sample scripts available that illustrate how to incorporate the file selection dialog into a script.

TNTatlas. A Project File can contain a HyperIndex structure and an SML structure. Thus a script, a TurnKey Product, or an APPLIDAT can put the user into a TNTatlas geodata set. The user of the script uses the atlas to navigate to the layer(s) of interest. The script can then allow them to select it for whatever use is desired in the balance of the script.

Using a TNTatlas in an SML script can be very powerful. It permits a script to easily provide navigation and access to a large prepared collection of geodata. At present, only the whole object which is selected is passed into the following section of your script. Better integration of this last minute idea will be added after V5.90, so that calling the script gets information about the zoom, view location, and other settings in the current atlas page selected for subsequent use in the script. For example, you may want your script to continue on with the specific area and zoom of the image located, not just its identity and location.

Encryption. You now have the option, when saving your SML script into a Project File, to encrypt it to conceal and protect its content and logic. You can select to encrypt for any key, a given key, and/or require the user to enter a password. An attached color plate entitled New SML Features (continued) illustrates this procedure. By using encryption, you can now protect both your proprietary algorithms and associated sensitive geodata. For example, APPLIDATs with encrypted scripts and encrypted geodata can only be used for the display and analysis purposes for which they were designed. This scheme has several levels of protection and can be carried to the extreme that only the person with the proper TNT key and password can open the APPLIDAT, and then it contains only the geodata appropriate for their site and use.

Documentation.

Using On-Line. All SML functions now have current on-line descriptions. Many have sample scripts associated. The description of the function and its parameters and a sample script (when available) can be displayed using the "Details" button displayed after each function is selected from its scrolling list. SML is expanding rapidly, and this kind of information is difficult to keep current in many different places. Thus, all other sources of function descriptions have been deleted. You will no longer find function descriptions in the appendix of the on-line Reference Manual, in an out-of-date Application Note (now discontinued), in the Getting Started booklet on Using the Spatial Manipulation Language, or in this MEMO (only a list of the names of added functions). You will find attached a color plate entitled New Features for SML Developers to demonstrate some of the features of this new on-line documentation.

Because it is expanding rapidly, many of you creating SML scripts frequently download the process from microimages.com. The on-line descriptions are the only way that you can also get the reference information on each function that you require. Please continue to keep up with these developments and progress by downloading the process periodically.

 

IMPORTANT: Larger sample scripts created by MicroImages and other clients are not embedded within the SML process or distributed on the TNT product CD. To review or obtain all the newest sample SML scripts, please use the SML script exchange on microimages.com.

 

Printing List. You can now save the on-line descriptions and sample uses of functions in your current version of SML. This is done using the Help menu choice when you are in the SML process. Save Class References saves a simple list of all the current classes and descriptions. Save Function List saves a list of all current functions and their one line descriptions. Save Function Reference saves the complete detailed on-line documentation of all the functions. Any of these files can then be loaded into your favorite editor. They can also be printed as a basis for planning a new script, for quick off-line reference, to identify new functions, and so on. Always use these on-line descriptions, as they are the most current for any version of SML downloaded from microimages.com.

New Functions.

Introduction. The rapid expansion of this geospatial programming language (SML) continues with the addition of 237 new functions in V5.90, bringing the total number of functions to 582. Information can now be passed between scripts using *.INI files (for example, Preference files).

You can now create and manage dialog boxes to control your SML applications and collect user input. Complex views can be displayed using all the layer types in a view; scripts can create a pin map layer; display DataTips, ToolTips, and the new HelpTips; show and edit database tables; add or delete icons from the menu bar; and even create 3D perspective views. Database tables can be created and managed, and fields can be added to or deleted from them. Vector objects can now be created and managed, and initial tools such as buffer zones and standard properties table creation can be used with them (3D coordinates are also maintained). Objects can be converted from one type to another, such as vector to raster, raster to vector (autobounds and line tracing), vector to regions, and so on. Regions can now be created, logically combined, and applied to define areas for further processing. More functions have been requested and added for drawing and managing text, basic files, script exits, and other related tools.

New. These new toolkits or groups of functions are now available:

• colormap management

• dialog box creation

• database table creation and management (including inserting new fields)

• vector object creation and management (including 3D coordinates)

• vector toolkit (for example, buffer zones, standard attributes, …)

• object type conversion (for example, CAD boundaries, line tracing, vector to region, ...)

• GPS port functions

Expanded. These function toolkits have a significant number of new functions:

• more display and view control functions (for example, pin mapping, layer management, DataTips, ToolTips, HelpTips…)

• many more drawing functions (for example, text and associated font control)

• many more region functions (for example, copy, point in region, combinations)

• more math and file management functions

Display Functions.

CloseViewHistogram

Closes a histogram.

CreateViewHistogram

Pops up a histogram of a Raster with an optional Region.

DBEditorCloseTable

Closes a table opened via DBEditor.

DBEditorOpenSingleRecordView

Opens single record view of a table.

DBEditorOpenTabularView

Opens a tabular view form of a table.

GroupCreateLayerManagerForm

Creates layer manager as a form (not a dialog).

GroupOpenLayerManagerWindow

Creates "layer manager" dialog.

GroupQuickAddDBPinmap

Adds a database pinmap to a group.

LayerOpenControls

Opens layer controls.

PinmapLayerGetFieldInfo

Gets field information for a pin map layer.

PinmapLayerOpenDatabase

Opens a pinmap layer database.

UpdateViewHistogram

Forces update of histogram to current region.

ViewRedrawIfNeeded

Redraws a view (but only if it has changed since last redraw).

ViewSaveSnapshot

Saves a snapshot of a view.

View Functions. (for putting a view window in a dialog and managing it)

CreateToolTip

Adds a ToolTip to a drawing area.

DestroyToolTip

Destroys ToolTip.

DispAddCallback

Registers function to call when an action happens on a view.

DispCreate2DGroup

Creates a 2D group in a display.

DispGetRasterFromLayer

Gets the raster used by a given layer.

DispGetVectorFromLayer

Gets the vector used by a given layer.

DispSetStatusBar

Sets the status bar on a standalone display window.

DispStatusBarClear

Clears the status bar.

GroupAddRaster

Adds a raster to a group.

GroupAddRasterVar

Adds a raster to a group given an SML raster variable.

GroupCreate3DView

Creates a 3D view of a group.

GroupCreateView

Creates a 2D view of a group.

GroupDestroy

Destroys a group.

GroupGetLayerByName

Gets a layer pointer given the layer name.

GroupQuickAddCAD

Quick adds a CAD layer (by prompt).

GroupQuickAddRaster

Quick adds a raster layer (by prompt).

GroupQuickAddRasterVar

Quick adds a raster layer given by SML variable.

GroupQuickAddTIN

Quick adds a TIN layer (by prompt).

GroupQuickAddVector

Quick adds a vector layer (by prompt).

GroupQuickAddVectorVar

Quick adds a vector layer given SML variable.

GroupRemoveAllLayers

Removes all layers from a group.

LayerDestroy

Destroys a layer.

LayerHide

Hides a layer.

LayerShow

Shows a layer.

LayoutCreateView

Creates a view for a layout.

LayoutGetGroupByName

Gets a group pointer given group name.

ToolAddCallback

Registers function to call for tool events.

View3DReadPosIni

Reads 3D view setting from ini file.

View3DWritePosIni

Writes 3D view setting to ini file.

ViewActivateTool

Activates a given tool.

ViewAddStandardTools

Adds "standard" tools to a view.

ViewAddToolIcons

Creates icons for tools added.

ViewCreate3DViewPosTool

Adds 3D view position tool icon to view.

ViewCreateExamineRasterTool

Adds Examine Raster tool icon to view.

ViewCreateHyperIndexTool

Adds HyperIndex tool icon to view.

ViewCreateMeasureTool

Adds Measurement tool icon to view.

ViewCreateMultiPolygonTool

Adds multiple polygon drawing tool icon to view.

ViewCreatePolygonTool

Adds polygon drawing tool icon to view.

ViewCreateRectangleTool

Adds rectangle tool icon to view.

ViewCreateSketchTool

Adds sketch tool icon to view.

ViewDestroy

Destroys a view.

ViewGetMapScale

Gets the current map scale of a view.

ViewGetTransLayerToScreen

Gets transformation from layer to screen.

ViewGetTransLayerToView

Gets transformation from layer to view.

ViewGetTransViewToScreen

Gets transformation from view to screen.

ViewOpen3DControls

Opens the 3D controls.

ViewOpenLayerControls

Opens the layer controls.

ViewRedraw

Redraws view.

ViewRedrawFull

Redraws view (full extent).

ViewSetMapScale

Sets the mapscale for a view (for next redraw).

ViewSetMessage

Sets message in status line at bottom of view.

ViewSetStatusBar

Sets status bar at bottom of view.

ViewStatusBarClear

Clears status bar at bottom of view.

ViewTransPointLayerToView

Translates a point from layer coordinates to view coordinates.

ViewTransPointViewToLayer

Translates a point from view coordinates to layer coordinates.

ViewZoom1X

Sets view to 1X zoom and redraws.

ViewZoomIn

Zooms in on view.

ViewZoomOut

Zooms out on view.

Colormap Management Functions.

ColorMapFromRastVar

Reads colormap from under raster.

ColorMapGetColor

Gets a color structure from the colormap.

ColorMapSetColor

Sets a colormap color given a color structure.

ColorMapSetColorHIS

Sets a colormap color to given HIS values.

ColorMapSetColorRGB

Sets a colormap color to given RGB values.


ColorMapWriteToRastVar

Writes a colormap under a raster.
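The colormap calls above read and write indexed color entries. A rough Python analogy (the class and method names here are illustrative only, not the SML API) of what ColorMapGetColor and ColorMapSetColorRGB do:

```python
# Illustrative sketch only: a 256-entry colormap with RGB get/set,
# loosely analogous to ColorMapGetColor / ColorMapSetColorRGB.

class Color:
    def __init__(self, red=0, green=0, blue=0):
        self.red, self.green, self.blue = red, green, blue

class ColorMap:
    def __init__(self, size=256):
        self.entries = [Color() for _ in range(size)]

    def set_color_rgb(self, index, red, green, blue):
        # Store one RGB triple at a colormap slot.
        self.entries[index] = Color(red, green, blue)

    def get_color(self, index):
        # Return the Color structure stored at a slot.
        return self.entries[index]

cmap = ColorMap()
cmap.set_color_rgb(1, 100, 0, 0)   # a dark red
```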

Drawing Functions.

ActivateGC

Does subsequent drawing with a given graphics context.

CreateGCForDrawingArea

Creates a graphics context.

DestroyGC

Destroys a graphics context.

DrawTextSetColors

Sets the colors for text drawing.

DrawTextSetFont

Sets font for text drawing.

DrawTextSetHeightPixels

Sets text height.

DrawTextSimple

Draws a text string.

DrawUseStyleObject

Changes the style object used for subsequent calls to SetStyle() functions.

SetColor

Sets color by color structure.

Widget Functions.

CreateForm

Creates a form.

CreateIconButtonRow

Creates an icon button row.

CreateRowColumn

Creates a row/column form.

CreateScrolledWindow

Creates a form with scroll bars.

CreateToolTip

Adds a ToolTip to a drawing area.

DestroyToolTip

Destroys ToolTip.

DialogFullScreen

Sets dialog to full screen mode.

DialogToBottom

Moves dialog to bottom of visible windows.

DialogToTop

Moves dialog to top of visible windows.

StatusContextCreate

Creates a status context from a status handle.

StatusContextDestroy

Destroys a status context.

StatusSetBar

Sets the value of a status bar.

StatusSetMessage

Sets the text message for a status bar.

Dialog Creation Functions.

AlignWidgets

Makes labels line up.

CreateButtonRow

Creates a row of push-buttons.

CreateDrawingArea

Creates a drawing area.

CreateFormDialog

Creates a form dialog to put widgets in.

CreateFrame

Creates a frame around widgets.

CreateHorizontalSeparator

Creates a horizontal line on a dialog.

CreateIconLabel

Adds an icon to a dialog.

CreateIconPushButton

Adds an icon push-button to a dialog.

CreateIconToggleButton

Adds an icon toggle button to a dialog.

CreateLabel

Creates a label on a dialog.

CreateModalFormDialog

Creates a modal form dialog to put widgets in.

CreatePromptNum

Creates a prompt for numeric value.

CreatePromptStr

Creates a prompt for string value.

CreatePushButton

Creates a (text) push-button.

CreatePushButtonItem

Creates a button item.

CreateToggleButton

Creates a (text) toggle button.

CreateToggleButtonItem

Creates a toggle button item.


CreateVerticalSeparator

Creates a vertical line on dialog.

DestroyWidget

Destroys a widget.

DialogClose

Closes a dialog.

DialogOpen

Opens a dialog.

DialogWaitForClose

Waits for user to close given modal dialog.

StatusDialogCreate

Creates a status dialog.

StatusDialogDestroy

Destroys a status dialog.

StatusSetDefaultHandle

Sets the "current" status line.

WidgetAddCallback

Registers function to call when an action happens on a widget.

Database Functions.

Allow use of variables for table names. You can do stuff like...

table$ = "MyTable";

x = Vect.poly.(table$).Field

You can do the same with field names. This would allow the script to pick the name of the table it was going to process.

DatabaseGetTableInfo

Gets database table information.

FieldGetInfoByName

Returns a field class by name.

FieldGetInfoByNumber

Returns a field class by number.

NumRecords

Returns number of records in a table.

OpenRasterDatabase

Returns a database class for functions that need one.

OpenVectorLineDatabase

Returns a database class for functions that need one.

OpenVectorPointDatabase

Returns a database class for functions that need one.

OpenVectorPolyDatabase

Returns a database class for functions that need one.

RecordDelete

Deletes one or more records.

TableAddFieldFloat

Adds a new field of type float.

TableAddFieldInteger

Adds a field of type integer.

TableAddFieldString

Adds a new field of type string.

TableCreate

Creates an empty table. Use TableAddField???( ) functions to add fields to it.

TableGetInfo

Returns a table class.

TableInsertFieldFloat

Adds a new field of type float but inserts it before field "before".

TableInsertFieldInteger

Adds a new field of type integer but inserts it before field "before".

TableInsertFieldString

Adds a new field of type string but inserts it before field "before".

TableLinkDBASE

Makes a link to a dBase file.

TableNewRecord

Adds new record to a database.

TableOpen

Opens a database table.

Vector Functions.

CloseVector

Closes an open vector object.

GetInputVectorList

Gets multiple vector objects.

GetVectorPolyAdjacentPolyList

Returns list of all polygons that share a common line with a given polygon in a vector object.

CreateTempVector

Creates a temporary vector object.

CreateVector

Creates a vector object without user dialog using same flags as GetOutputVector.

OpenVector

Opens a vector object without user dialog using same flags as GetOutputVector.

OpenInputVectorList

Opens multiple vector objects.

VectorToolkitInit

Initializes a vector object for use with vector toolkit functions.

Vector Toolkit Functions.

VectorUpdateStdAttributes

Updates standard attributes.

VectorLineRayIntersection

Finds the closest vector line in a given direction from a given point and returns the intersection.

VectorDeleteStdAttributes

Deletes standard attributes.

VectorDeleteDangleLines

Deletes dangling lines shorter than a given maximum length.

VectorSetZValue

Sets Z value for Vector element.

VectorToBufferZone

Creates buffer zone vector from selected vector elements of a given type, or all elements of a given type.

VectorElementInRegion

Tests a vector element against a region.

FindClosestLabel

Finds closest label to a point.

Conversion Functions.

BinaryRasterToRegion

Converts binary raster to a region.

ConvertCADToVect

Converts CAD object to a vector.

ConvertRegionToVect

Converts a region to a vector.

ConvertVectorPolyToRegion

Converts single vector polygon to a region.

ConvertVectorPolysToRegion

Converts multiple vector polygons to a region.

ConvertVectToRegion

Converts a vector to a region.

RasterToCADBound

Creates a CAD object by tracing boundaries.

RasterToCADLine

Creates a CAD object by tracing lines.

RasterToVectorBound

Creates a vector object by tracing boundaries.

RasterToVectorLine

Creates a vector object by tracing lines.

VectorElementToRaster

Converts vector element to a raster.

VectorToBufferZone

Creates buffer zone vector from selected vector elements.

Region Functions.

BinaryRasterToRegion

Converts binary raster to a region.

ConvertVectorPolyToRegion

Converts single polygon to a region.

ConvertVectorPolysToRegion

Converts multiple polygons to a region.

CopyRegion

Copies a region.

CreateRegion

Creates a region without using a dialog.

GetInputRegion

Opens an input region via a dialog.

GetOutputRegion

Opens an output region via a dialog.

OpenRegion

Opens a region object without using a dialog.

PointInRegion

Tests if a point is in a region.

RegionAND

Returns the intersection of two regions.

RegionOR

Returns the union of two regions.

RegionXOR

Returns the exclusive OR of two regions.

RegionSubtract

Returns the result of subtracting regions.

RegionTrans

Converts a region using a TransParm (transformation parameters).

SaveRegion

Saves a region object without using a dialog.
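Conceptually, the boolean region functions behave like set operations on the area a region covers. A hedged Python sketch (real regions are geometric objects; modeling them as sets of cell coordinates is only to make the AND/OR/XOR/Subtract semantics visible):

```python
# Illustrative only: model a region as a set of (x, y) cells so the
# boolean region-function semantics are easy to see.

def region_and(a, b):      return a & b   # intersection (RegionAND)
def region_or(a, b):       return a | b   # union (RegionOR)
def region_xor(a, b):      return a ^ b   # exclusive OR (RegionXOR)
def region_subtract(a, b): return a - b   # difference (RegionSubtract)

def point_in_region(point, region):
    # PointInRegion analog: simple membership test.
    return point in region

r1 = {(0, 0), (0, 1), (1, 0)}
r2 = {(0, 1), (1, 1)}
```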

TIN Functions.

TINElementInRegion

Tests a TIN element against a region.

CAD Functions.

CADElementInRegion

Tests a CAD element against a region.

Georeference Functions.

GeorefGetParms

Opens dialog for selecting georeference subobject.

TransPoint2D

Transforms 2D point using transparm.
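TransPoint2D applies stored transformation parameters to a point. A minimal sketch of what a 2D affine transform of that kind does (the parameter names below are assumptions, not the actual TransParm layout):

```python
# Illustrative only: apply a 2D affine transform
#   x' = a*x + b*y + tx
#   y' = c*x + d*y + ty
# loosely analogous to TransPoint2D with a TransParm.

def trans_point_2d(x, y, a, b, c, d, tx, ty):
    return (a * x + b * y + tx, c * x + d * y + ty)

# A pure translation by (10, 20):
x2, y2 = trans_point_2d(5, 5, 1, 0, 0, 1, 10, 20)
```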

String Functions.

FileNameGetExt

Returns file extension portion of path qualified filename.

FileNameGetName

Returns filename portion of path qualified filename.

FileNameGetPath

Returns path portion of path qualified filename.

Math Functions.

Bound

Forces a value into given range.

GetUnitConvAngle

Gets scale factor for angle unit conversions.

GetUnitConvArea

Gets scale factor for area unit conversions.

GetUnitConvDist

Gets scale factor for distance unit conversions.

GetUnitConvVolume

Gets scale factor for volume unit conversions.
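Bound clamps a value into a range, and the GetUnitConv* functions return multiplicative scale factors between units. A small sketch of both ideas (the factor table is an assumption holding just a few sample distance units):

```python
# Illustrative only: clamp a value into a range (Bound analog) and
# look up a distance-unit scale factor (GetUnitConvDist analog).

def bound(value, low, high):
    # Force value into [low, high].
    return max(low, min(high, value))

# Hypothetical factor table: meters per unit.
_METERS_PER_UNIT = {"meters": 1.0, "feet": 0.3048, "kilometers": 1000.0}

def get_unit_conv_dist(from_unit, to_unit):
    # Scale factor such that value_to = value_from * factor.
    return _METERS_PER_UNIT[from_unit] / _METERS_PER_UNIT[to_unit]
```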

Raster Functions.

GetInputRasters

Gets more than one raster.

Console Functions.

CheckCancel

Forces the SML script to check the cancel button.

SetStatusBar

Displays a status bar at the bottom of the console window.

SetStatusMessage

Displays a message at the bottom of the console window.

GPSPort Functions.

GPSPortClose

Closes an open GPS port.

GPSPortOpen

Opens a GPS port.

GPSPortRead

Reads data from a GPS port.
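GPSPortRead returns position data from an attached receiver. Typical serial GPS output is NMEA 0183 text, so a hedged sketch of pulling latitude and longitude out of a GGA sentence gives a feel for the data involved (the SML functions may expose positions quite differently):

```python
# Illustrative only: parse latitude/longitude from an NMEA 0183
# GGA sentence, the kind of text a serial GPS receiver emits.

def parse_gga(sentence):
    fields = sentence.split(",")
    # Latitude is ddmm.mmmm, longitude is dddmm.mmmm.
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon

gga = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
lat, lon = parse_gga(gga)
```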


Focal Functions.

FocalMedian

Returns median value of cells in focal area.
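FocalMedian computes the median of the raster cells in a moving window. A minimal sketch over a 2D list with a 3x3 window (clipping at the raster edges is an assumption; the SML function's edge handling may differ):

```python
# Illustrative only: median of the 3x3 focal neighborhood around
# (row, col), clipped at the raster edges (FocalMedian analog).
from statistics import median

def focal_median(raster, row, col):
    rows, cols = len(raster), len(raster[0])
    window = [raster[r][c]
              for r in range(max(0, row - 1), min(rows, row + 2))
              for c in range(max(0, col - 1), min(cols, col + 2))]
    return median(window)

raster = [[1, 2, 3],
          [4, 9, 6],
          [7, 8, 5]]
```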

Tool Functions.

ViewCreatePointTool

Creates a point tool.

ViewCreateSelectTool

Creates a select tool on a view.

File Function.

DeleteFile

Deletes a file (takes filename).

IniReadNumber

Reads a number from the current ini file.

IniReadString

Reads a string from the current ini file.

IniWriteNumber

Writes a number to the current ini file.

IniWriteString

Writes a string to the current ini file.

RenameFile

Renames a file (takes old filename and new filename).

Exit Functions.

OnExit

Registers function to call just before script exits.

Exit

Exits the script (calls functions registered with OnExit).

WaitForExit

Suspends the script but processes callbacks and events until Exit is called.

Object Functions.

CloseStyleObject

Closes an open style object.

GetInputFileName

Opens an input file via a prompt dialog.

GetOutputFileName

Opens an output file via dialog.

GetInputRasters

Gets more than one raster.

IniClose

Closes an ini file.

IniOpenFile

Opens an ini file.

IniOpenObject

Opens an RVC text object.

OpenStyleObject

Opens a style object.

OpenStyleSubObject

Opens a style subobject.

RegionTrans

Translates a region's coordinates.

RunSML

Runs an SML script.

StyleReadBitmapPattern

Reads a BITMAPPATTERN from a style object.

StyleReadLinePattern

Reads a LINEPATTERN from a style object.

StyleReadPointSymbol

Reads a POINTSYMBOL from a style object.

Miscellaneous functions.

SetMedian function

Works like SetMean but computes a median instead.

Classes.

The object-oriented concept of classes, previously used in the revised Spatial Display process introduced in V5.80, has now been introduced into SML. Classes are variable types that hold what functions return: a function returns a class handle that is assigned to the class variable, and this handle can then be passed to other functions in your script that need it. Members can be read only, write only, or read and write, depending on the class and member. Class and member names are case insensitive in SML.

Some classes have members that give information about the class. These members can be string, numeric, or other classes. For example, the class COLOR has members Red, Green, Blue, and Name:

class COLOR red;

red.Red = 100;

red.Green = 0;

red.Blue = 0;

or

red.Name = "red"; # name from rgb.txt

The list of the first 95 classes provided with V5.90 follows. Download new versions of the SML process from microimages.com to obtain the new functions and classes being added weekly. You can see a complete list and descriptions of available classes in the scrolling list in the top pane of the Insert Class window.

BITMAPPATTERN

ButtonItem *

CADLayer

CallbackList

COLOR

ColorMap

CompositeWidget

CONTEXT *

DATABASE

DataTip

DBEDITOR

DBEDITORTABLE

DBEDITOR_SingleRecordView

DBFIELDINFO

DBEDITOR_TabularView


DBTABLEINFO

DBTABLEVAR

DialogShell

Disp

DispCallbackList

DisplayInfo

FILE

Georef

GPSData

GPSPort

GraphicsContext

Group *

Histogram

INIHANDLE

Layer

LayerManager

Layout *

LINEPATTERN

LineStyle

LMComponent

MapProj

Mat3x3

MdispRegionTool

MdispTool

OBJECT

PinmapLayer

POINT2D

POINT3D

PointStyle

POINTSYMBOL

PolyStyle

PORT

Prompt

PromptNum

PromptStr

PushButtonItem

RASTER

RASTERINFO

RasterLayer

RECT

RegionLayer

RegionTool

StatusContext

StatusHandle

STYLEOBJECT

TextStyle

TIMER

TINLayer

ToggleButtonItem

Tool

ToolCallbackList

TOOLTIP

TransParm

VECTOR

VECTORINFO

VectorLayer *

VectorLayerLabels

VectorLayerLines

VectorLayerNodes

VectorLayerPoints

VectorLayerPolys

View

View3D

ViewPoint3D

Widget

XmBulletinBoard

XmCallbackList

XmDrawingArea

XmForm

XmFrame

XmLabel

XmManager

XmPrimitive

XmPushButton

XmRowColumn

XmScale

XmScrollBar

XmScrolledWindow

XmSeparator

XmToggleButton

* indicates a class modified after the V5.90 CDs were duplicated.

Encrypting SML Scripts.

This description of the proposed encryption methods appeared in the V5.80 MEMO but is repeated here since these methods are now officially available in V5.90.

IMPORTANT: TNTlite is a FREE product. As a result, SML scripts cannot be encrypted in any way in TNTlite. Also, encrypted scripts cannot be used in TNTlite. Unencrypted scripts which fit within TNTlite can be created, distributed for free or profit, and used in TNTlite.

SML scripts produced in V5.90 are public and can be used with any TNTmips, TNTedit, and TNTview key and openly read by anyone on any platform. An option to encrypt scripts is currently being implemented and should be available by the time you read this MEMO. Check with software support for its status if you are ready to use this feature. To encrypt a script, you will simply choose a script and the encryption option and designate who will be able to use the script. An encrypted script will then be created and saved for your distribution.

The contents of encrypted scripts cannot be read by anyone, but the scripts still function just like any public SML script. Encrypted scripts will still run in any TNT product designated by you (including TNTlite if the size limitations are observed). However, encryption allows you to control the distribution and use of any of your scripts which contain proprietary ideas. For example, your objective might be to prevent the unauthorized use of a script you sell, hide key concepts in a free script, distribute scripts which can only be used by authorized users, and so on.

Options. The following options will be available for you to select for creating an encrypted script:

1) Encrypted Only. With this choice, the encrypted script can be run with any TNT key. For example, this kind of script can still be distributed via the SML exchange on microimages.com. In this case, only the content of the script is protected, although it cannot be used by anyone who does not have a TNT product key.

2) Encrypted with a Password. An encrypted script could be run with any key if the password you set up is provided. But, the script itself cannot be read or modified. Using this option, the script can be placed on a web site or a CD for distribution but only used by someone who later secures a password from the developer of the script by purchase or some other basis. Note, however, that only a single password is used, so it could get into circulation and provide unauthorized use of the script.

3) Specific Key Protection. In this case, MicroImages extends the protection of its hardware authorization key to your encrypted script. When you encrypt the script on your TNTmips, you enter the key number of the specific key which will be authorized to run the script. The process then produces an encrypted script which is usable only with that specific key. This approach means that you will be distributing each script individually, and it will only run with that specific TNT product. Unless the key is stolen, no one else can run your script.

4) Password and Key Specific Protection. In this case, you provide both the key number and a password(s) for the script. This kind of script can only be run by those who have the specific key and are authorized to have the password.

SML scripts using options 3) and 4) may sound complicated to manage and distribute one at a time. However, the protection extended to your valuable script would be very hard to beat. It would also be possible to sell this kind of script automatically over the Internet while still securing single-user protection. For example, a buyer of a script could simply submit the script name, electronic return address, payment details, TNT key number, and optional password to your server via an electronic form. The encrypted script would then be automatically created and sent back for immediate use.

Naming Conventions.

The following standard naming conventions will now be used by MicroImages in SML. We suggest that you adopt these conventions for your own scripts; they will also help you read scripts created by others.

1) Objects. Capitalize the first letter.

for example, Vector, Raster, CAD, TIN, Region, Object

2) Multiple parameters. If a function takes more than one of the same thing, number them.

for example, Raster1, Raster2, ... RasterN

3) Non-objects must start with a lowercase letter and can be mixed case thereafter.

for example, numberElements, xCoordinate, and so on

4) String parameters must end in a "$".

for example, name$, groupName$, and so on

5) Class variables are _not_ objects. They start with a lowercase letter as in #3.

for example, parent, widget, dialog, ...

6) Allowed abbreviations (this may grow).

gc (graphics context)

georef (georeference)--this is a handle so it is a number

min, max
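Taken together, a short hypothetical declaration block following these conventions (using only syntax shown elsewhere in this MEMO) might read:

```
# hypothetical fragment illustrating the naming conventions above
class COLOR fillColor;    # 5) class variable: lowercase first letter
numberElements = 12;      # 3) non-object: lowercase, mixed case after
groupName$ = "Soils";     # 4) string variable ends in "$"
# 1) objects capitalized: Vector, Raster; 2) numbered: Raster1, Raster2
```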

New Sample Script.

An unusual new sample script has been posted on the script exchange at /sml/repository/Coastal_Bays for your use and modification. Its implementation was funded by the Maryland Geological Survey. The attached plate entitled Coastal Erosion Rates Using SML describes its special application where maps of historical coastlines are available. Perhaps you can use and modify this script to quantify changes in the boundary of natural features of interest to you (for example, ecotones, river channels, flooding levels, …).

Modifications since V5.90 CDs.

As was the case in V5.80, development has continued at a rapid pace in SML since the V5.90 CDs were shipped out for duplication. New functions and new classes have been added. If you are actively working with SML to develop internal scripts, TurnKey Products, or APPLIDATs, you should plan on frequent upgrades during the next quarter from microimages.com to keep up with these additions. A new kind of TKP sample script has also been created (see below) to illustrate the construction of a field data logging product using both orthoimages and GPS inputs. You can get the latest beta version of this script via software support, and it will be posted on the SML script exchange at microimages.com as soon as it is in reasonable shape.

New Functions. The following 23 functions have now been added in addition to the 237 provided on the V5.90 CD. Please download the latest version of SML to get a suite of brand new, powerful functions just added for reading multiple GPS devices and using their coordinates in your scripts.

Georeference Functions.

GeorefSetProjection

Sets the projection of the class Georef.

Database Functions.

TableReadFieldNum

Reads a number from a table (using class DBTABLEINFO).

TableReadFieldStr

Reads a string from a table (using class DBTABLEINFO).

TableWriteRecord

Writes values to an existing database record.

File Functions.

CopyFile

Copies a file.


Widget Functions.

CreateOptionMenu

Creates managed option menu.

Display Functions.

CADLayerGetObject

Sets a CAD variable to point to the CAD object from a CADLayer.

GroupAttachHorizontal

Sets horizontal position of display group in layout.

GroupAttachVertical

Sets vertical position of display group in layout.

GroupQuickAddRegionVar

Quick adds a region layer given a region variable.

GroupRead

Reads a saved display group from a file.

GroupSetActiveLayer

Sets the active layer for a group.

GroupWrite

Saves a display group to a file.

LayoutCreate

Creates a display or hardcopy layout.

LayoutRead

Reads a saved display layout from a file.

LayoutWrite

Saves a display layout to a file.

PinmapLayerFindClosest

Returns the class DBFIELDINFO for a given field in a PinmapLayer.

RasterLayerGetObject

Sets a raster variable to point to the raster object from a RasterLayer.

RegionLayerGetObject

Sets a region variable to point to the region from a RegionLayer.

TINLayerGetObject

Sets a TIN variable to point to the TIN object from a TINLayer.

VectorLayerGetObject

Sets a vector variable to point to the vector object from a VectorLayer.

ViewZoomToGroup

Zooms so that a given group fills the view.

ViewZoomToLayer

Zooms so that a given layer fills the view.

New Classes. The following seven classes have now been added in addition to the 96 provided on the V5.90 CD.

GPSCallbackList

GroupXPosn

GroupYPosn

MenuItem

PointTool

PrintParms

XmOptionMenu


Future Plans.

Suites of surface modeling and image classification functions are scheduled for routine addition. Import and export functions are also badly needed. A simpler, alternate ArcView layer control panel scheduled for the TNT products will become available for use in SML. An expanded form template procedure will be added to control data entry in the TNT products and in SML in particular. It will enable more user-friendly forms (in place of dialogs) for field data logging and database creation and editing.

Submit Your Requests.

MicroImages will try to add the new low level functions you need as you request them. Several such one-on-one functions were provided for clients who requested them since V5.80 was shipped and are available to all in V5.90. However, please note that it may not be easy to add new TNT macro or process-like functions you request due to their complex nature or current code structure. For example, a request was made for a watershed analysis function. The TNT watershed process has about 12 steps which must be done in a selected order. Thus, the request for this macro function has gone unanswered for the time being in deference to other higher priority and more easily added functions needed in SML.

In requesting a new function, please understand that MicroImages has set priorities on the creation of new SML functions which support the interests of all clients in general. As a result, your function may or may not be assigned an "as-soon-as-possible" priority, but you will be promptly informed of its priority and can easily check it on the on-line index maintained at microimages.com. The following general criteria will be used to assign your function one of two priorities.

1) High priority (in other words, "available within the next several weeks") will be assigned to those functions which are relatively easy to implement and of general interest to others.

2) Low priority (in other words, "put in with the other 1800 new feature requests") will be assigned to those functions graded as difficult and time consuming to implement and/or of limited interest to the general user of SML.

3) As a corollary, if your function is assigned to 2) above (low priority), you can ask for a cost estimate for moving its priority from 2) to 1).

Scripts for Hire.

In general, MicroImages would prefer that those who wish to hire out their script writing consider using an outside consultant. This would enable MicroImages to concentrate upon creating new and better tools for use by all. However, if MicroImages contracts to create a script, it will be placed in the public domain as another example script: uncopyrighted, unencrypted, and distributed with the TNT products without cost.

If you want to contract for private SML scripts, there are a number of very experienced consultants who have already developed complex scripts for their own use or for others. These consultants not only know how to design and write a complex SML script, but also have the experience in geospatial analysis needed to design a complex application. Please consult one of these experts if you wish to have a private SML script developed for your own use or sale.


Ray Harris

11878 Arborlake Way

San Diego, CA 92131

phone (619)592-5013

FAX (619)592-5407

email ray.harris@gdesystems.com

Jurgen Liesenfeld

Programm und Raum

51 Heuduckstrasse

Saarbruecken 66117

GERMANY

phone (4968)1584-8168

FAX (4968)154-976

email j.liesenfeld@saarnet.de

Ray McClain

Moss Landing Marine Laboratories

P.O. Box 450

Moss Landing, CA 95039

phone (408)755-8682

FAX (408)753-2826

email mcclain@mlml.calstate.edu

Jack Paris

Paris and Associates, Inc.

1172 South Main St., #255

Salinas, CA 93901

phone (408)769-9840

phone (408)582-4221

FAX (408)769-9840

email paris@redshift.com

Paul Pope

601 S. Orchard St., Apt. B

Madison, WI 53715

phone (608)266-5285

phone (608)255-2233

email papope@students.wisc.edu

Karl Tiller

PSC, GmbH

18-20 Ursulum

Giessen 35396

GERMANY

(4964)148-598

FAX (4964)149-2785

email PSC@compuserve.com

 

If you would like to add your name to the above list of consultants for hire for creating SML scripts, please communicate this to MicroImages. In the future, this list will be maintained for your reference on microimages.com in the same area used for posting sample scripts, some of which have been contributed as sample work by these consultants.


Help: We need the names of more clients, students, programmers, and other potential partners who are experienced at creating SML scripts and will hire out to do them for money, royalties, samples, … The United States is short on programmers at the moment (owing to the year 2000 adjustments), but those of you in other nations may wish to offer your services.


* Geospatial APPLIDATs.

What is an APPLIDAT?

MicroImages is introducing in V5.90 via SML a new geospatial product concept called an APPLIDAT (APPLIcation plus DATa). What is an APPLIDAT? It is a MicroImages-invented word representing a combination of a software application(s) and the data upon which it will automatically operate.

Some of you can remember way back to the DOS MIPS era when you used the SAVEFILE and RESTFILE utilities. RESTFILE was a kind of early step toward an APPLIDAT in that it had the software to restore a file on the floppy disk with the data which was compressed and stored in the saved file. Thus you never had the wrong RESTFILE software version for the data in the specific saved file. Furthermore, executing the RESTFILE program caused it to automatically identify and use the accompanying saved data file.

More recently, everyone has become familiar with self-extracting zipped data files. These single files contain compressed data plus the specific executable needed to decompress it. To the operating system for which they were created, these data-plus-application files look like ordinary executable files. They can be started from an icon on the desktop and will regurgitate the compressed file in its uncompressed form.

The unique MERLIN TNTatlas CD samples prepared by the Maryland Department of Natural Resources and MicroImages, the prototype 1, 2, and 3 San Francisco TNTatlases, and similar proprietary TNTatlases produced by some of you, are examples very close to an APPLIDAT. The TNTatlas APPLIDAT CD delivers very large and complex geodata sets with appropriate software for all popular computer platforms. The Maryland Department of Natural Resources reports that even though it has distributed "thousands" of these MERLIN TNTatlas CDs, it gets very few calls on how to use them, as has also been the case with those distributed by MicroImages. It attributes this to the fact that those who are provided with the MERLIN sample atlas have little difficulty installing and using it.

How do they work?

APPLIDATs allow you to create very easy-to-use, self-contained, turnkey geospatial products of your own. The simplest feature of APPLIDATs is that the end user does not have to hunt for their geodata. The geodata in the Project File you prepare for them is automatically used by the turnkey SML applications you store in the same Project File. The user of an APPLIDAT does not need to know anything about the underlying TNT product, its operation, or how to find the geodata required for their local application. A simple, double mouse click on the icon representing the APPLIDAT Project File starts your product as follows:

1) TNTview product is started, but its interface is hidden.

2A) The SML scripts in the Project File automatically generate a toolbar menu in the large X window. This toolbar contains an attractive icon which can be selected to start each individual application (in other words, each SML script) making up your product.

2B) If only one application (one SML script) is present in the APPLIDAT Project File, the application is automatically started. A toolbar menu with the TNTview and Exit icons is also automatically created.

3) When the application is run or selected from the toolbar menu, it will automatically find and use the required objects in the Project File. These are usually the objects pertinent to that particular APPLIDAT user's location. However, these objects can be of any size (unless TNTlite is being used).

4) After the application (SML script) loads the objects, it can immediately present the first analysis step such as presenting drawing tools or whatever is required.

5) At this point, your application is running with the required data, and the end user can be given instructions via interactive "HelpTips" as to what to do next such as "push and hold the left mouse button and draw a polygon...".

In summary, a self-contained APPLIDAT can be designed, as in the sample included with V5.90, to use a very simple startup procedure: 1) find and click on the icon for the APPLIDAT, 2) select the icon on the toolbar menu representing the application of interest, and 3) pause long enough in indecision for the HelpTips to pop in with instructions on what to do next. Furthermore, step 2 can be skipped if only one application is provided.
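The startup sequence in steps 1 through 5 boils down to a simple dispatch rule. The following Python sketch is purely conceptual; the real dispatch is internal to TNTview, and every name in it (the function and its return values) is invented for illustration:

```python
# Conceptual sketch of the APPLIDAT startup dispatch described in steps 1-5.
# The real logic lives inside TNTview; these names are hypothetical.

def launch_applidat(scripts):
    """scripts: names of the SML script objects found in the TKP Project File."""
    if len(scripts) == 1:
        # 2B) a lone application starts immediately; the toolbar still
        #     offers the TNTview and Exit icons
        return ("run", scripts[0], ["TNTview", "Exit"])
    # 2A) otherwise build a toolbar with one icon per application,
    #     plus TNTview and Exit at the right end
    return ("toolbar", None, scripts + ["TNTview", "Exit"])
```

For a single-script APPLIDAT the sketch returns a "run" action immediately; for several scripts it returns the toolbar contents, which mirrors the two cases above.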

How can you use APPLIDATs?

Consultants. MicroImages has several categories of clients recognizable by their responsibilities in the application of geospatial analysis. Some of you are "power users" of the TNT products. You use geospatial analysis to complete complex projects for yourselves or your clients. APPLIDATs may not be of particular benefit to you. However, the many other additions to the TNT geospatial programming language described elsewhere will permit you to create more complex SML scripts to solve your unique problems in your projects.

Specialists. A second group of power users are those of you who use geospatial analysis to create geodata sets for use by others within your organizations. These other users (your clients) can be the bosses, managers, salesmen, exploration geologists, and many others in your organization who want to use, or should be using, geodata and simple geospatial analysis in their daily work. Some, especially managers, do not want to spend the time to become even casual users of geospatial software. However, they often have to add something into the geospatial analysis to make it work for them. For example, they need to add the location of a property, the outline of a field, the outline of a damaged area, their unique geologic interpretive skills, their final decision authority, or many other kinds of local "lore" and expertise which only they possess. APPLIDATs are the means by which you can use your expertise to empower them with easy-to-use geospatial analysis tools and the specific geodata they require.

Service Groups. A third group of you wants to prepare some kind of geospatial application and geodata sets for sale or use by others. The use of APPLIDATs addresses this idea of bundling up and selling low cost site-specific geodata and associated applications as your own commercial products. The key to successfully introducing such products is to locate groups of potential end-use clients with a common need for access to geodata, your expertise in preparing it, and the common need to input information into the final analysis which is only available locally. A simple example of this would be APPLIDATs for farmers. They require geodata in the form of site-specific and preprocessed imagery (for example, georeferenced), soil sampling results, soil maps, GPS positions, and so on. But, they are the business owners and must input the field boundaries, knowledge of past practices, availability of machinery and capital, and so on, prior to arriving at a precision farming management map for each field. This model of putting the local professional "into the picture" can be easily extended to many other disciplines: ranchers, foresters, the military, field geologists, geochemists, real estate brokers, tax appraisers, ... In fact, anyone who deals with field situations should want to maximize precious and expensive field time by using your APPLIDATs.

Show Me What It Is.

The best way to introduce this new concept is by way of a sample. The color plates attached describe the sample V5.90 APPLIDAT. When you install any V5.90 product (lite or professional) except TNTatlas, this small, self-contained sample APPLIDAT will be installed as a demonstration. This sample contains several applications which make up a hypothetical product used in precision farming. When you use this APPLIDAT, please note that it has been designed to be used without instruction by a total novice. The interface design can be described as "discoverable"; that is, its operator figures out how to use the applications as they proceed. MicroImages is not presenting this sample as a tool which you or anyone else will use. It is provided as an example of what can be done to introduce others (your customers, bosses, clients, ...) to using the geospatial materials you can prepare, without making them learn anything.

How does it work?

The MicroImages geospatial programming language (SML) can now be used to prepare an APPLIDAT which has SML applications (for example, image interpretation, GIS analysis, field data logging, ...) and the specific geodata which they will automatically utilize (for example, RVC objects, database tables, ...). An APPLIDAT is actually a conventional, normal TNT Project File which has had its extension changed from the normal *.RVC to *.TKP where TKP represents TurnKey Product. However, for a TKP Project File to function as an APPLIDAT, it must contain one or several SML scripts designed to display, analyze, edit, ... the objects in that TKP Project File.

On Windows-based systems, each APPLIDAT--that is, each Project File with the extension TKP--will show as a standard icon on your desktop. The name assigned to each icon is the name of the TKP Project File. When you select one of these APPLIDAT icons, it will automatically start TNTview (which is always contained in TNTmips or TNTedit) in the background, but the TNT product menu will not directly show. A colorful toolbar will appear in the big X window with icons representing each individual SML application script you have created to make up your product (your APPLIDAT). At the right end of the toolbar, an icon for accessing TNTview will appear automatically, as well as an icon for exiting the APPLIDAT. Unless the user of the APPLIDAT specifically selects the TNTview icon, they will not know they are using a more complex product.

Each of the scripts represented by an icon on the toolbar is actually an object stored in the TKP Project File with an appropriate icon. When its icon is selected on this toolbar, the script can automatically use the geodata objects coupled with it in the TKP Project File.


NOTE: If you are using a Macintosh, no APPLIDAT icon will appear on your desktop as yet. This concept of running the APPLIDAT from your desktop will be added for Macintosh platforms after V5.90 ships. On a Macintosh platform, simply choose the Custom menu option or icon from the toolbar and then select the option APPLIDAT.

On UNIX and LINUX platforms, the concept of starting applications from the desktop is not available. To start APPLIDATs on these platforms, simply choose CUSTOM/APPLIDAT/Sample from the menu or start SML and select and run the SML named APPLIDAT/Sample.


What are some of the salient features?

APPLIDATs can have many unique properties, inheriting many of the distinctive features of the TNT products (ToolTips, HelpTips, DataTips, object conversion, platform transparency, high level of interaction, integrated surface modeling, ...).

Ease-of-use. The sample V5.90 APPLIDAT is totally self-contained and should require no explanation for a user for whom you have already installed the "enabling" TNT product. Nevertheless, the TNT products are quite easily installed on both Windows and Mac platforms. The main operational actions of the APPLIDAT user are easily discoverable with the use of HelpTips, ToolTips, and DataTips. This does not imply that your APPLIDATs are simple-minded applications. It simply means that the approach they use needs to be carefully designed and thought out from the viewpoint of their prospective user.

The user interface for an APPLIDAT needs to be carefully designed to require no retention of knowledge from one use to the next. It is important that the user of an APPLIDAT remember only that the last time they used one, months ago, it was "pretty simple" to get through without reading a manual. An ideal APPLIDAT accomplishes a complex application but is so simple to operate (features are discoverable) that it is not necessary to remember how to use it (zero retention). Good APPLIDATs are also probably presented as a sequence of shorter scripts with specific objectives. An APPLIDAT with a single, large, complex script which tries to "do it all" will probably be harder for its user to follow.

Encryption. SML scripts can be encrypted, as can the geodata objects an APPLIDAT contains. Encryption can be carried to the point where only the end user with the password and unique TNTview key can use the APPLIDAT in any way.

Platform Transparent. Remember that Project Files and geodata they contain are platform independent! Your SML scripts are also platform independent! Thus, a product you create as an APPLIDAT is also platform independent. Let's make this as clear as possible: a single TKP Project File (an APPLIDAT = your product) will automatically run on any Windows, Mac, LINUX, or UNIX platform supported by the TNT products. You do not have to concern yourself with which platform your client will use to operate your APPLIDAT.

Object Sizes. The sample APPLIDAT installed with V5.90 uses objects which are quite small, so that even users of TNTlite can try it. However, please be clear that APPLIDATs running with a professional TNTview can use and manage objects of any size. The only limits on objects in any Project File are the maximum file size allowed by the particular operating system (for example, two gigabyte files in W95). In fact, your first exercise might be to alter this V5.90 sample APPLIDAT so that it starts up using your own large color 24-bit composite image or map and an associated elevation raster. Guidelines on how to do this are included in the "Instructions" script in this sample APPLIDAT.

You will find that in the sample APPLIDAT, the View windows still contain all the zoom, pan, and other positional and layer controls needed to deal with larger images. Simply replacing the small CIR color composite image in this APPLIDAT with an identically named image containing a whole satellite scene converts this APPLIDAT to map biomass for any field in the large image. The whole image would initially be displayed, and the user instructed via modified HelpTips to navigate to a particular agricultural field and draw its boundaries. The second script would then automatically use this zoomed image position and result for its first view but would allow zooming out to locate other management assets. The third script could then be used (with the substitution of a larger elevation raster) to permit a 3D view of any field in the larger area.

Free at Runtime. APPLIDATs will run with the TNTlite versions of TNTmips, TNTview, or TNTedit only if neither the scripts nor the geodata they contain are protected by encryption and the objects fit within the TNTlite limits. MicroImages is willing to provide "free at runtime" APPLIDATs if you are willing to allow the contents of your scripts and their geodata to be publicly inspected and used. If you plan to protect the contents of your APPLIDATs (data and/or scripts) by encryption when you sell or otherwise distribute them, then they will need a professional TNT product such as TNTview to operate, regardless of their size.

Distribution. Suppose that you wish to prepare and distribute site-specific geodata as APPLIDATs for 50 clients weekly. Let's take the V5.90 sample APPLIDAT as an example of the type of product to be shipped to each of 50 farm clients. The only things that vary from week to week are the site-specific CIR color composite images, which are prepared with the many other tools in the TNT products. Suppose at the beginning of the season you prepare a Project File for each farm client. These Project Files are placed in one directory and contain the SML scripts and the static geodata for that farm, such as an elevation raster of the general area.

During the growing season, a large area satellite image prepared in TNTmips can automatically be chopped into segments using vector objects containing the outline of each farm (not the individual fields) and the associated client database. These image segments could be automatically combined with the appropriate SML scripts to produce the TKP Project File for each client's weekly APPLIDAT. These site-specific APPLIDATs could then be delivered by Internet, CD, or other media to the specific user. If each is encrypted and uses a client password tied to a TNT hardware key, then only your client could use their APPLIDAT regardless of how many others they obtained from your CD or web site.
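As a rough illustration of this weekly production loop, here is a Python sketch. Everything in it is hypothetical: the two helpers stand in for TNT processes (cookie-cutting the large scene with each farm outline, and assembling the weekly *.TKP Project File), and the data structures are invented for the example.

```python
# Hypothetical sketch of the weekly per-client APPLIDAT production described
# above. Neither helper is a real TNT or SML function; both are stand-ins.

def clip_image_to_farm(scene, outline):
    # stand-in for cookie-cutting the large satellite scene with the
    # vector outline of one farm
    return {"scene": scene, "outline": outline}

def build_tkp(name, segment, scripts, static_data):
    # stand-in for assembling the weekly image segment, the SML scripts,
    # and the static geodata (e.g. an elevation raster) into one *.TKP file
    return {"file": name, "layers": [segment, static_data], "scripts": scripts}

def weekly_applidats(clients, scene, scripts):
    return [build_tkp(c["name"] + ".tkp",
                      clip_image_to_farm(scene, c["outline"]),
                      scripts,
                      c["static"])
            for c in clients]
```

The point of the sketch is only the shape of the pipeline: one pass over the client list, one image segment per farm, one TKP Project File per client per week.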

Future Plans. The TNT Project File is already a very robust and unique container to convey all kinds of geodata with your APPLIDATs. It provides a platform independent means of conveying all common kinds of geodata to the APPLIDAT, including a full relational database.

MicroImages will continue to expand the functions in SML, allowing more complex kinds of APPLIDATs. For example, a high priority at the moment is the conversion of a suite of all the surface fitting methods in TNTmips into SML functions.

The HelpTips are an example of a new feature which can be used to create an easy-to-use, discoverable product. But more interface design components are still needed. For example, there is a need for a simpler layer control legend, similar to ArcView's, which the APPLIDAT user can easily understand and use to control the data layers in their view (yes, this will also be available directly in the TNT products).

MicroImages can gradually modify the TNT processes to more automatically deliver geodata into APPLIDAT form for each client's local uses. The basic support capabilities are already in many of the needed processes, such as the use of regions to cookie cut rasters and vectors into Project Files; the concept of a virtual mosaic (only the region requested when needed); control of each operation with database queries (in other words, who gets which cookie).

Finally, the Internet can be used to provide each of your clients with their unique and current APPLIDAT in a timely, but confidential fashion (for example, within hours of the image collection). This might take the form of an atlas web server which functions as a simple, but fast map and image server and can also be extended to locate, prepare, and deliver your APPLIDATs for a specific area upon request and payment via a browser. In this case, the APPLIDAT would be prepared automatically when ordered and paid for by interaction with your web site.

Generic Geospatial Products.

With all these new ideas about APPLIDATs, please do not lose sight of the use of SML to prepare applications and products which use generic geodata in Project Files or other formats. At the moment, your generic products are restricted to using geodata in Project Files or in other generic formats linked into Project Files (TIFF, JPEG, DEM, DOQ, coverages, E00, shapefiles). However, a priority has now been placed upon creating the SML functions to read and write other geodata formats in scripts.

The SML scripts in the sample APPLIDAT supplied with V5.90 can be quickly modified to allow them to present the standard object selection windows needed to navigate and select the objects required from generic Project Files. By adding these variable inputs and outputs, your SML based product is no longer an APPLIDAT, but a generic product working with general geodata as required. MicroImages realizes that the current navigation through directories and specific objects is not easy for beginners and will soon be working to replace this in the TNT products and SML scripts with a more windows oriented approach.

It is important to realize that even though the user of these generic products selects their own geodata, you can still use many of the new features added for APPLIDATs in V5.90. You can create a TKP Project File without any geodata and with only SML scripts. This will create an icon on the end user's desktop which represents your product. This icon can bring up the same toolbar menu and icons as the sample APPLIDAT. However, when the user of the product starts a script by selecting an icon, they will be prompted by your modifications to navigate to and select the generic objects needed for the application from other project files.

Marketing Considerations.

You and MicroImages are constantly being called upon to compete with ArcView and MapInfo add-on products. Runtime MapInfo and ArcView based products are still software applications. You still must learn how to run each software product, secure the appropriate geodata of the proper kind, format, ..., and then work toward some sort of solution.

APPLIDATs are really a different kind of idea and product designed as more of a new information delivery system rather than another competitive software product. Each APPLIDAT is a unique product custom prepared for a particular individual. The SML script(s) will be identical for a group of potential users, but the geodata is unique. Since site-specific geodata is built-in, an APPLIDAT can provide its user with site-specific information while easily incorporating their required and unique local data (for example, the field boundary, crop type, planting date, ...).

By V6.00, it will even be possible to create TurnKey Products (TKPs) which behave quite similarly to MapInfo and ArcView products. One SML component which will be added to meet this objective will be a new View window with legend-like layer controls (similar to a standard ArcView view). In addition, the functions needed to import and export from other popular file formats will be provided in SML. For this purpose, MicroImages is also working on the direct import of the MapInfo TAB files for direct use in SML and TNTedit. So, by V6.00, your TKPs could automatically use shapefiles, TAB, DOQ, DEMs (importing them as needed), and generate a view whose operation would be familiar to the users of ArcView. TKP products could also be designed to produce (by automatic export) geodata of immediate and direct use in these other products. However, even though your TKP may begin to resemble these other products in appearance, it will retain the internal operational power of the RVC Project File (for example, pyramiding rasters, mixed geodata types, projection transparency, ...) and the many other unique and powerful properties of the TNT products.

* Field Data Logger TKP.

Introduction.

This public script is being prepared for the use of the Maryland Department of Natural Resources (MDNR). They have a project for members of the AmeriCorps program started by President Clinton. In this program, those in need of a job work for nine to ten months on a national service project for a minimum wage and earn college tuition credits. MDNR is furnishing oversight and logistics support for members of one of AmeriCorps' projects. Their task is to annually walk all the stream channels in Maryland and document sources of environmental problems. At present, these observations are recorded on standard paper forms, with positions located on 11" by 17" printed reference images. Subsequently, in the office, the observations are keyed into a database. Then the plotted field locations are found on the same images in TNTmips and translated into latitude and longitude for entry into the record representing each sample. This data logger will be used to take the CIR orthophotos of Maryland into the field via TNTview and automate this process.

Remember, MicroImages is going to build these models and you are going to adapt, rewrite, and extend them into your custom projects and products. Thus, you can use this script as a model to build many other colorful, fun, attractive field data logging scripts as TKPs (in other words, navigate to the geodata) or APPLIDATs (automatically loading geodata views including via a TNTatlas). It is being designed so that ultimately it can be operated entirely without a keyboard, using a stylus instead. The next sample script of this type will be to expand this script into another variation which can be used in the field for forest or agricultural mapping and sketching, integrate GPS positions, log into database tables, and so on.


IMPORTANT: None of the scripts developed by MicroImages for anyone will be encrypted; all will be in the public domain. These scripts can be used or modified by you without asking permission. Since they are not encrypted, they will operate in the FREE TNTlite if the geodata sets conform to its spatial limitations.


How does it work?

Here is how this sample data logger is designed to work. Starting the TKP or APPLIDAT starts a TNTatlas showing the CIR orthoimages of a county in Maryland. The GPS position appears on this view as a cursor (if you are in Maryland; if not, see the comments below on simulation). Moving around causes the cursor to track your changing GPS position on the CIR orthoimage, including scrolling the view automatically. The mouse cursor is also present, so you can zoom, pick icons, use the navigator tool to move up and down through the layers in the TNTatlas, and so on.

Table Selection. Concurrent with this image positioning and view activity, a toolbar is shown with 11 big icons. These icons represent the 11 detrimental stream conditions that the volunteers are locating and documenting, the conditions which the designers of this survey wish to inventory when walking up the stream (for example, Broken Pipe, Stream Blockage, Bank Construction, ...). Each also represents a separate database table to be built up using this script in the field. Using the mouse to select an icon will expose the dialog box needed to construct a record in the table for that stream condition. Each dialog box and its associated record contain an average of 20 fields/parameters, with most stored as strings.

Table Entries. When the dialog box is exposed, its fields for latitude and longitude will automatically default to the current GPS position. Almost all the other data fields in the dialog boxes support multiple choices (as any good survey should). Thus, selecting a category button in the dialog will "drop down" the choices. Picking a choice will insert it into the data box for that parameter and ultimately into that field in the record. This survey does require the entry of a couple of numbers into each dialog box (for example, outflow pipe diameters). "Dial a number" gadgets are being designed to input these via a stylus when a keyboard is not available. When the dialog box is completely filled out and correct, clicking "OK" closes it and creates the new record. The symbol for this new observation (record), and those for all previous records in all tables, are also pin-mapped if they occur in the current view.
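The record-building step can be pictured with a small Python sketch. The field names and data shapes are invented for illustration (this is not MDNR's actual schema); only the behavior matches the description above: latitude and longitude default to the GPS position, and the remaining fields come from the multiple-choice picks.

```python
# Illustrative sketch of creating one stream-condition record as described
# above. Field names here are invented, not the survey's actual schema.

def new_record(gps_position, choices):
    lat, lon = gps_position
    record = {"latitude": lat, "longitude": lon}   # defaults from the GPS
    record.update(choices)                         # multiple-choice picks
    return record
```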

Locating Positions. It may be that you cannot occupy the actual field position with the data logger to obtain its GPS coordinates. For example, the observer may be wading up the overgrown stream bed (no GPS signals available), and the second party with the data logger is out in the open up on the bank. Assume also that the GPS is just a simple, cheap unit accurate only to 30 meters. The Maryland CIR orthoimages displayed in the view are more accurate than this. Under these circumstances, the position of the stream condition can be more accurately located on the orthoimage, and this is supported.

Before selecting one of the 11 icons for a condition report, you can use the mouse to move the active location or position from the GPS cursor to some other "photo-interpreted" position on the screen. The GPS cursor will still show and track the movement of the unit, but will change color to indicate it is not the active position that will be inserted into the table. The second, new cursor controlled by the mouse will appear where you click the mouse to point to the location of the feature. It will now be the active position and will be logged into the record. Keep clicking the mouse, and the active position will move. This approach allows you to take the position from the GPS unit or the image. Remember that soon we will have one meter satellite images to use in a similar fashion in areas and nations where orthoimages are not available.
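The rule for which position gets logged can be summarized in a few lines of Python. This is only a conceptual model of the behavior described above; the class and method names are hypothetical.

```python
# Conceptual model of the active-position rule: the GPS cursor keeps
# tracking the unit, but the latest mouse click (the "photo-interpreted"
# position) overrides it as the position to be written into the record.

class ActivePosition:
    def __init__(self):
        self.gps = None      # latest position from the GPS unit
        self.picked = None   # latest mouse-picked position, if any

    def update_gps(self, lat, lon):
        self.gps = (lat, lon)        # cursor tracks even when not active

    def pick(self, lat, lon):
        self.picked = (lat, lon)     # each new click moves the active position

    def active(self):
        return self.picked if self.picked is not None else self.gps
```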

Editing Positions. All previous records in the 11 tables show as color symbols if they occur within your current view (a normal pin-mapped view). Move the cursor to an existing pin and click on it, and the dialog box for that table will automatically open and present that record for the selected pin, and the symbol for that pin disappears. You can edit the fields in that dialog box if you choose by the same methods they were entered (for example, multiple choice). You can edit the latitude/longitude coordinates directly, use the mouse to select the active position, or take the GPS as the active position. Once the pin is selected and the table is open, the only difference in procedure is that the defaults in the table are not the same as if it were a new pin and record.

Using Microsoft CE Operating System.

The APPLIDAT in V5.90 and the data logger TKP described above will lead to more questions about using the TNT products with the Windows CE operating system, especially as you begin to see how the keyboard could be eliminated. CE is a deliberately crippled version of Windows upon which the TNT products are unlikely ever to run. For example, CE does not support floating point and other features critical to the TNT products. While CE is not getting good press due to these shortcomings, please remember it was designed to provide a familiar interface to run in TVs, autos, appliances, and so on. If Microsoft gives in to pressure to expand CE, then they just end up with W95 or W98 anyway.

Any reason to make the TNT products run in CE is rapidly disappearing. You think of CE because it is used on some palmtop machines which have, or are about to get, color screens at 640 by 480 pixels and 256 colors. But if you consult some of the latest appropriate magazines, you will find a suite of field data loggers, big palmtops, and devices that look like flat color screens. These new devices support 640 by 480 or 800 by 600 pixels and W95. Several have no keyboards and use a stylus. Personal Computing for April 1998 features several articles on these kinds of devices, as well as on excellent subcompacts of 1+ kilogram (for example, the latest Sony, Toshiba Libreto, ...). These are all available now to take to the field to run the TNT products, TKPs, and APPLIDATs. Within the next year, these products will rapidly expand in capabilities, so that none of us will even consider CE as a possible choice when full W98 is equally available on the same miniaturized hardware.

Convenient Test GPS Unit.

By now you may be thinking about your need for a GPS unit to experiment with these rapidly expanding features in the TNT products. These include data logging and your other SML applications, the use of GPS inputs directly in TNTmips and TNTatlas in V5.90, and the new features outlined above already added into V5.90+. MicroImages proposes that you start small and buy one now for your experimentation. We suggest the EAGLE Explorer used in the sporting and recreation industry and outlined below. Remember that in general, these low cost GPS units do not differ that much in uncorrected positional accuracy but in price, convenience, and form.

The EAGLE unit is recommended because our experimentation with GPS has shown it to be very handy for testing data logger scripts developed with GPS input, trying TNTatlas designs, doing demos, and so on, when you are inside where no GPS signals are available. It hooks to your serial port via cable and communicates via the NMEA protocol supported by the TNT products. Indoors, it has a mode in which you can manually enter a latitude and longitude position, which it will treat as if it were being read outdoors. You can then send this position to the TNT product via the serial port in NMEA format. You can also enter a speed and direction to move the position, sending new coordinates. This allows you to simulate indoors what would happen in the actual field locations of interest, even if they are far away.
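NMEA 0183 sentences are plain comma-separated text, which is why this indoor simulation works over a simple serial cable. As a hedged illustration (this is not the TNT products' actual NMEA code), the position fields of a standard GGA sentence can be converted to decimal degrees like this:

```python
# Minimal parser for the position fields of an NMEA 0183 GGA sentence.
# Latitude arrives as ddmm.mmmm with an N/S flag, longitude as dddmm.mmmm
# with an E/W flag. Checksum validation is omitted for brevity.

def parse_gga(sentence):
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    lat = int(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = int(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon
```

For example, the sentence `$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47` decodes to roughly 48.1173 degrees north, 11.5167 degrees east.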

MicroImages has several GPS units of varying quality, but so far the EAGLE Explorer is practical, inexpensive, and widely available around the world. This US$175 unit looks something like a small handheld calculator. It has its own internal rechargeable batteries, which run for a long period. The antenna is inside this handheld unit. A single US$20 lightweight serial cable leads from this unit (which could be mounted on your hat) to the serial port of the computer. More information on this unit can be found on their web site at eaglegps.com. The web site also contains a geographic dealer locator so you can find an international dealer near you. Eagle Electronics can also be reached by phone at (800)324-0045 or by FAX at (918)234-1710.

Internationalization.

Improvements.

The characters in window title bars can be localized for the W95 and NT platforms. You will need to install the international TWM file to get this support. Use the "Install miscellaneous items" option in the setup program, select "Install special internationalization options", and choose "Install International TWM to install localized window titles (BETA)". This beta version prevents MI/X from closing automatically when a TNT product is exited. Simply close MI/X using the Standard Windows icon.

The units of measurements can be localized by translating the units.txt resource file.

The documentation for all the SML functions is in a resource file "smlfuncs.txt" that can be translated. However, this would be a lot of work, and this material is evolving rapidly, so you may wish to concentrate your efforts on translating or maintaining version compatibility in the TNT products' interface.

Translation Aids.

MicroImages has outlined a series of utility programs to assist you in the creation and maintenance of your translation of the TNT products into another language. A "Dictionary" and a "Merge" utility are now available and will be posted on microimages.com in the "localization" area for those who are interested.

Dictionary Utility. This "Dictionary" utility is most useful for those who are undertaking a complete translation of the TNT interface by translating the resource files. When using this utility, you select all of the English TNT resource files of interest, and it builds a new file containing all their English words and the frequency of their occurrence. Articles, numbers, and other common words are eliminated. You can then study this file before embarking on your translation to determine which English words are common and important in the interface and how you are going to translate them. You can then edit this file to provide the initial equivalents for these words in your language and perhaps a brief definition for future reference. While you will find that the translation of some of these words may need to be changed when you read them in context, this initial effort in building a dictionary will provide a basis for planning your approach and save a lot of time in the long run. For example, the dictionary will be available again in three to four months when you need to translate the changes in the resource files for an update, for use with other English based geospatial materials, for Getting Started tutorials, and so on.
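The core of such a Dictionary utility is just a word-frequency tally over the resource-file text. A minimal Python sketch of that core follows; the stop-word list here is a tiny illustrative subset of the "articles, numbers, and other common words" the real utility eliminates.

```python
# Minimal sketch of the Dictionary utility's core: tally the frequency of
# English words across resource-file text, ignoring common words. Only
# alphabetic runs are matched, so numbers drop out automatically.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "or", "in", "for"}

def word_frequencies(texts):
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[A-Za-z]+", text):
            w = word.lower()
            if w not in STOP_WORDS:
                counts[w] += 1
    return counts
```

Run over all the English resource files, the resulting tally is the raw material for the dictionary: the most frequent words are the ones worth translating carefully first.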

Merge Utility. A "Merge" utility is now available to assist you in easily identifying those entries in the resource files which have been added or changed compared with your previous translation of an earlier version of the TNT products. Merge reads each of your old translated resource files (for example, for V5.80) and merges them with the newer English entries (for example, for V5.90) into a new file. You can then load the merged file into your editor, visually scan for English entries, consult your TNT dictionary, and translate just those English entries.

An option in this utility allows saving a new "changes" file of only those English additions detected in the newer all-English file. You can then translate all the entries in this smaller changes file. The Merge utility can then be used as noted above to insert these translated changes into your earlier translated file.
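The merge and "changes" behavior described above might be sketched as follows. The actual TNT resource-file format and the utility's exact rules are not documented here; the simple dictionary-of-entries model and the three-input comparison below are assumptions made purely for illustration.

```python
def merge(old_english, old_translated, new_english):
    """Return (merged, changes).

    merged keeps your existing translation wherever the English entry
    is unchanged, and falls back to the new English text wherever the
    entry is new or was edited. changes holds only those English
    entries, so they can be translated as a smaller batch and later
    inserted back into the translated file.

    Each argument is assumed to be a dict mapping an entry key to its
    text (a stand-in for a parsed resource file)."""
    merged, changes = {}, {}
    for key, text in new_english.items():
        if key in old_translated and old_english.get(key) == text:
            merged[key] = old_translated[key]   # translation still valid
        else:
            merged[key] = text                  # new or changed: left in English
            changes[key] = text
    return merged, changes
```

Because unchanged entries keep their translation, a visual scan of the merged file shows only English text where work remains, which matches the editing workflow described above.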

Others. MicroImages is now looking for a good public domain "multilingual word processor or text editor" to provide: one that uses UNICODE, can show English in one window and any other language in another, and can handle mixed languages. It would be especially convenient if it were written in Java, which is UNICODE based and platform independent. Please nominate any you have found, as writing one ourselves would take time away from creating other capabilities in the TNT products. A "Purge" utility program is planned to delete unused orphan entries in any language in the resource files. If you have suggestions for additional utility programs that would help you create or maintain your translations, please describe them to us.

Russian Localization Kit.

MicroImages prepared a Russian translation of the interface for V5.80 of the TNT products and posted it on microimages.com several months ago. The new Merge utility has been run on these Russian V5.80 resource files to insert the new English entries for V5.90. The combined English and Russian entries in the new files were counted and showed an expansion of 13.75%. These changes will be translated, and V5.90 Russian resource files will be created and posted. For various reasons, this is likely a higher than usual number of English additions. On average, about 10% of the user interface of the TNT products is projected to need translation for each new release. Thus, after your dictionary and first translation into a language are completed, maintaining it will be much easier, amounting to perhaps 5% of the first-time effort for each new release.

Printing.

There have been no significant alterations to the printing process this quarter.

Installed Sizes.

Loading a full installation of V5.90 of TNTmips onto your hard drive (exclusive of any other products, data sets, illustrations, Word files, and so on) requires the following storage space in megabytes.

Platform and Operating System               Installed Size

PC using W31                                 77 MB
PC using W95                                 96 MB
PC using NT (Intel)                          96 MB
PC using LINUX (Intel)                       66 MB
DEC using NT (Alpha)                         97 MB
PMac using MacOS 7.6 and 8.x (PPC)           89 MB
Hewlett Packard workstation using HPUX       96 MB
SGI workstation using IRIX                  115 MB
Sun workstation using Solaris 1.x            84 MB
Sun workstation using Solaris 2.x            82 MB
IBM workstation using AIX 4.x (PPC)         105 MB
DEC workstation using UNIX (OSF/1) (Alpha)  120 MB

 

V5.90 of the HTML version of the Reference Manual including illustrations requires 34 MB. Installing all the sample geodata sets for TNTlite and TNTmips requires 154 MB. The 38 Getting Started Booklets require a total of 47 MB.

Upgrading.

If you did not order V5.90 of your TNTmips and wish to do so now, please contact MicroImages by FAX, phone, or email to arrange to purchase this upgrade or annual maintenance. Upon receipt and processing of your order, MicroImages will supply you with an authorization code by return FAX only. Entering this code when running the installation process allows you to complete the installation and immediately start to use TNTmips 5.90 and the other TNT professional products.

If you do not have annual maintenance for TNTmips, you can upgrade to V5.90 via the elective upgrade plan at the cost in the tables below. Please remember that new features have been added to TNTmips each quarter. Thus, the older your version of TNTmips relative to V5.90, the higher your upgrade cost will be. As usual, there is no additional charge for the upgrade of your special peripheral support features, TNTlink, or TNTsdk which you may have added to your basic TNTmips system.

Within the NAFTA point-of-use area (Canada, U.S., and Mexico) and with shipping by UPS ground. (+150/each means $150 for each additional quarterly increment.)

 

 

Price to upgrade from TNTmips

TNTmips Product Code    V5.80   V5.70   V5.60   V5.50   V5.40   V5.30 and earlier
D30 to D60 (CDs)        $250     450     600     750     900    +150/each
D80                     $375     675     900    1050    1200    +150/each
M50                     $250     450     600     750     900    +150/each
L50                     $250     450     600     750     900    +150/each
U100                    $450     800    1000    1200    1400    +200/each
U150                    $615    1100    1450    1700    1950    +250/each
U200                    $780    1400    1875    2175    2475    +300/each

For a point-of-use in all other nations with shipping by air express. (+150/each means $150 for each additional quarterly increment.)

 

 

Price to upgrade from TNTmips

TNTmips Product Code    V5.80   V5.70   V5.60   V5.50   V5.40   V5.30 and earlier
D30 to D60 (CDs)        $300     560     750     900    1050    +150/each
D80                     $425     800    1050    1200    1350    +150/each
M50                     $300     560     750     900    1050    +150/each
L50                     $300     560     750     900    1050    +150/each
U100                    $500     850    1050    1250    1450    +200/each
U150                    $665    1150    1500    1750    2000    +250/each
U200                    $830    1450    1925    2225    2525    +300/each

 

MicroImages Authorized Dealers

There were no new dealers added during the past quarter.

Discontinued Dealers

The following dealers are no longer authorized to sell MicroImages products for various reasons. Please do not contact them regarding support, service, or information about the TNT products. Please contact MicroImages directly or one of the other MicroImages Authorized Dealers.

GeoNova S.A. (Antonio Gomez) of Buenos Aires is discontinued.

ERIC Pty. Ltd. (Robert Gourlay) of ACT, Australia is discontinued.

Technical Solutions (Ian Cameron) of Brisbane, Australia is discontinued.

Computers

Each quarter there is a better computer to recommend at approximately the same price. The Gateway 'best for your money' computer is now a 400 MHz Pentium II with a 10 GB drive and 19" monitor. (Last quarter it was a 300 MHz Pentium II with 8.4 GB drive and 64 MB of memory.)

Best Power for the Price.

Gateway GP6-3000 ($3200)

Intel 400 MHz Pentium II
128 MB SDRAM
512 KB internal cache
10 GB 9.5 ms ultra ATA hard drive
19" EV900 color monitor (.26 dp)
3D 64-bit nVidia AGP display board with 4MB memory
DVD-ROM drive and I/O card
3.5" diskette drive
3 piece speaker set and Wavetable audio card
modem
Mid-tower case
keyboard and MS Intellimouse
W95, W98, MS Office 97 (w/o Access)

Web Site

Error Reports.

By now you are familiar with how to check on the status of your errors and new features on microimages.com under support/features. A new, easy-to-complete on-line error report form is also available at this location to assist you in reporting any errors you have encountered and do not find on the posted list.

Prices

There have been no price changes in the TNT products for this quarter.

[ Current Prices for TNT Products ]

Papers on Applications

Rewarded.

The following papers each received a $2000 reward.

It's Not Easy Being Green: Forest Developer Pursues Green Certification with GIS and Image Processing. by Kevin Corbley. Geo Info Systems, Volume 8, Number 2, February 1998. pp 32-35.

Plumbing the Murky Depths. by Kevin Corbley. GIS Europe, March 1998. pp 36-38.

Help From Above: Brush control made easy with infrared photography. by Kevin P. Corbley. Beef. Volume 34, Number 10, June 1998. pp. 48-52.

Other Papers.

Perspective adds value to aerial survey data. Getting best value out of aerial surveys requires a suite of geodata information and analysis tools, and a technique that is gaining popularity is the ability to display data from perspective viewpoints. by Richard DuRieu. Australia's Paydirt, February 1998, Vol. 2, page 12.

Airborne Versus Space Scanners. A case study of the advantages and disadvantages of using airborne and space based scanners. by Rob Gourlay. GIS USER, Feb-April, 1998. pp. 33-37.

GIS zur Unterstutzung des Precision Farming- Kostenoptimierung und Trinkwasserschutz - by Hans-Norbert Resch, Thomas Nette, and Matthias Trapp. GIS Aufsatze, Volume 3, 1997. pp. 10 -13.

Response of Nesting Ducks to Predator Exclosures and Water Conditions During Drought. by Lewis M. Cowardin, Pamela J. Pietz, John T. Lokemoen, H. Thomas Sklebar, and Glen A. Sargent. Journal Wildlife Management, Volume 62, Number 1, 1998. pp. 152-163.

Landsat Image Maps Shallow Caspian Coast. by Kevin Corbley. Imaging Notes, EOSAT Newsletter. Volume 12, Number 2, 1997. page 7.

PNG (Papua New Guinea) Urban Planning System. by Adrya Kovarch. Public Works Engineering, June/July 1997.

Early Warning/Detection System for Forest Fire Prevention in Indonesia. by Naoki Mituzuka Sawada and Tomoyuki Deda. Published in Japanese. 2 pp.

MicroImages launches versatile geospatial data editor. by Richard DuRieu. Australia's Mining Monthly, June 1998. pp. 55-56.

Promotional Activities

New TNTlite Flier.

A new TNTlite flier is enclosed for your reference. While the graphics have been redesigned, its content is quite similar to that of the previous flier.

Posters.

A variety of new promotional posters is included. These posters, earlier posters, and the TNTlite flier are all posted on microimages.com in PageMaker 6.5 format for printing at any size up to the 30" by 40" size for which they were designed. Dealers or any other party can download any of these materials for printing on any color printer.

New Staff

Jason L. Rader.

Jason has joined MicroImages, has spent his initial time working on the Getting Started booklets, and is already responding to your software support questions. Jason received a B.A. in Geography in 1997 from the University of Nebraska at Lincoln. His special academic interests were in criminology and geospatial analysis. Before joining MicroImages, he had intern experience at the Nebraska Natural Resources Commission preparing digital elevation models and digital soil maps and working with database materials.

Melanie B. Renfrew.

Melanie has joined MicroImages, has spent her initial time working on the Getting Started booklets, and is already responding to your software support questions. Melanie received a B.S. in Physical Geography in 1997 from Brock University in Ontario. Her specialized courses were in physical geography and computer science. Melanie worked summers and during school as a computer lab advisor and on related computer tasks including GIS.

Dr. Cynthia Philpott Brady.

Dr. Brady has joined MicroImages as a scientific writer and has been busily working with others trying to keep the Reference Manual current. She received a B.S. in Biology from Kansas State University in 1990 and a Ph.D. in Physiology from Colorado State University in 1995.

Thesis: Philpott, C.J. 1995. Rotational Dynamics of the Luteinizing Hormone Receptor Measured by Time-Resolved Phosphorescence Anisotropy. Colorado State University. 101 pages.

She occupied a postdoctoral position from 1994 to 1997 in the Department of Chemistry at Colorado State University. In 1995 she was also Instructor, Department of Biology at the University of Colorado in Denver. In 1996 she was also an adjunct faculty member in the Department of Veterinary Research at Front Range Community College teaching anatomy and physiology. She has contributed to published papers in Biochimica et Biophysica Acta (1995, 1997, and 1998) and Biology of Reproduction (1995).

Noteworthy Client Activities

Australia.

July 23rd, Dr. Jack F. Paris, Director of the Spatial Information, Visualization & Analysis (SIVA) Center at California State University at Monterey Bay, will deliver a keynote paper at the 9th Australasian Remote Sensing and Photogrammetric Conference in Sydney. The title of this invited presentation will be "Applications of Remote Sensing to Agribusiness". A longer version of the presentation will also be presented on July 24 at the agricultural workshop being held as part of the conference. This illustrated paper should be published on microimages.com by the time you read this.

Jack has been using TNTmips and its predecessor for at least 10 years. He will present his paper using PowerPoint together with TNTmips, operated from the layer controls panel. During the balance of his two weeks' visit following this presentation, Jack will consult with NSW Agriculture on their programs and activities. Jack can be reached during this trip in care of his host, Graeme Tupper, NSW Agriculture, (6126)391-3143, or tupperg@agric.nsw.gov.au.

Thailand.

Earth Intelligence Technologies Co. (EIT), the MicroImages dealer in Thailand, has published and distributed in Thai a printed TNTlite tutorial booklet of 177 pages. Sample pages from this excellent piece of work are enclosed. Since the economy in Thailand is slow, this company has taken this opportunity to invest money and staff in this effort to introduce TNTlite to the Thai universities accompanied by this excellent Thai language reference material.

This Thai book contains material and illustrations from 10 of the basic Getting Started tutorial booklets. It was prepared by starting with the PageMaker versions of these booklets as chapters. The illustrations were extracted and laid out into an 8.5" by 11" format and the associated text translated and reset in Thai. This produced a very attractive format and useful printed and bound book. Kudos to EIT for a job well done in hard times.

Japan.

The Forest and Forest Products Research Institute (Dr. Haruo Sawada) has recently used TNTmips on images in a study to assess the impact of the major forest fires in Indonesia.

Finland.

Soil and Water Ltd., the MicroImages dealer in Finland, has just completed a long and complicated effort to assemble a GIS database of the Pechora Sea. This required the assembly of over 1000 pieces of tabular data, metadata, satellite and aerial images, and map objects into TNTmips. The effort was undertaken in 1992 on behalf of the Finnish-Russian Offshore Technology Working Group, which is made up of members from both nations. These Working Group members collected and contributed to this GIS the geophysical, geological, geotechnical, engineering-geological, biological, ecological, and hydrometeorological data for the sea. Additional information on this GIS data bank can be seen in the enclosed press release plate prepared by Soil and Water.

Abbreviations

For simplicity, the following abbreviations were used in this MEMO:

W31 = Microsoft Windows 3.1 or 3.11.

W95 = Microsoft Windows 95.

W98 = Microsoft Windows 98.

NT or NT4 = Microsoft NT 3.1, 3.5, or 4.0 (3.1 is error prone, and thus the TNT products require the use of 3.5 and its subsequent patches).

Mac = Apple Macintosh using the 68xxx Motorola processor and MacOS 6.x or 7.x.

PMac or Power Mac = Apple Macintosh using the 60x Motorola PowerPC processor and MacOS 7.x or 8.0.

MI/X = MicroImages' X server for the Mac and PC microcomputer platforms and operating systems.

Appendix A

Previous Experience of Dr. Lee D. Miller in Hyperspectral Related Research and Development Projects.

Summary of Rangeland Spectral Measurement and Analysis Experience.

This research program was carried out by Dr. Lee D. Miller and his graduate students for the NSF International Biological Program's Grassland Biome Study. Approximately 2000 spectral radiance and spectral reflectance curves of .35 to .80 µm were collected by this author and graduate students from 1969 to 1972. These curves are for the grasses, larger plants, and soils which make up the shortgrass prairie and rangelands located in the central United States. Typical common names of the plants would be blue grama grass, buffalo grass, rabbit brush, sage, broomsnake weed, … These spectral curves were measured in-situ with a custom designed field spectrophotometer (see reference thesis below).

These curves were collected together with extensive ground control measurements as a scientific database to discover the concepts exploited today in the many remote sensing biomass and related vegetation indices. Almost all of these curves represent ground plots of .25 square meter from which the vegetation was subsequently removed for measurement of its wet and dry biomass, chlorophyll and other pigment concentrations, proportions of green and dead biomass, …

This library is referenced as an indication of the magnitude of effort that goes into the collection of vegetation oriented spectral reflectance libraries and appropriate control measurements. Trying to use this library today has several complications. 1) The spectral range of these curves is too limited for today's hyperspectral images of at least .4 to 2.6 µm. 2) The original digital storage media (computer tapes) decayed to the point that they were unreadable. However, a printed copy of each curve and the digital tabular values still exist. This material was borrowed and copied about a year ago by Dr. Compton J. Tucker (see thesis below) at NASA/GSFC who wanted to have a contractor reduce it again to digital curve form. The current status of this effort is unknown (contact Compton J. Tucker, NASA/GSFC, Code 923, Greenbelt, MD).

Design of a Field Spectrophotometer Lab. by Robert L. Pearson and Lee D. Miller. 1971. Science Series No. 2. Department of Watershed Science, Colorado State University, Ft. Collins, Colorado. 102 pages. (available as M.S. thesis of Dr. Robert L. Pearson from University Microfilms)

Remote Estimation of a Grassland Canopy/Its Biomass, Chlorophyll, Leaf Water, and Underlying Soil Spectra. M.S. Thesis of Dr. Compton J. Tucker. August 1973. Colorado State University, Fort Collins, CO. 212 pages. (available from University Microfilms)

Remote Multispectral Sensing of Biomass. Ph.D. Thesis of Dr. Robert L. Pearson. May 1973. Colorado State University, Fort Collins, CO. 180 pages. (available from University Microfilms)

Spectral Estimation of Grass Canopy Vegetation Status. Ph.D. Thesis of Dr. Compton J. Tucker. September 1975. Colorado State University, Fort Collins, CO. 106 pages. (available from University Microfilms)

 

Extraction of the Underlying Soil Spectra from Canopy Spectroreflectance Measurements of the Shortgrass Prairie. Compton J. Tucker and Lee D. Miller. Proceedings of the Third Annual Remote Sensing of Earth Resources Conference. March 1974. 12 pages.

A Three-Band Hand-Held Radiometer for Field Use. Compton J. Tucker, William H. Jones, William A. Kley, and Gunnar J. Sundstrom. Science, Vol. 211, 16 January 1981. pages 281-283.

Sample of other Agricultural spectral measurement experience.

Dr. Miller and other graduate students have completed other agricultural field spectral measurement programs, some of which are referenced below.

Correlations of Rice Grain Yields to Radiometer Estimates of Canopy Biomass as a Function of Growth Stage. Lee D. Miller, Kist Yang, Mike Mathews, Charles L. Walthall, and Russel Irons. 1983. Publication of the Nebraska Remote Sensing Center. 22 pages. Also published in the Journal of the Korean Society of Remote Sensing, Vol. 1, No. 1. 1985. pages 63-87.

This laborious study used several thousand hand held radiometric measurements of narrow spectral bands and associated canopy assays. It established that final rice grain yields could be predicted with ever increasing accuracy from canopy biomass estimates as the canopy developed. It also clearly established that overapplication of nitrogen (101 kilograms per hectare) drives rice into excess vegetative growth without increasing yield beyond that achieved by normal applications (34 kilograms per hectare).

Sample of Agricultural Image Processing Experience.

This research program of Dr. Lee D. Miller and students related estimates of hundreds of corn fields in Texas from LANDSAT MSS images to the actual in-situ samples of green corn canopy biomass. These correlations were repeated with multiple images as a function of time in the growing season.

Canopy Biomass Measurements of Individual Agricultural Fields with LANDSAT Imagery. Ph.D. thesis of Dr. Thomas D. Cheng. 1985. Dept. of Forest Science, Texas A&M University, College Station, TX. 199 pages. (available from University Microfilms)

Canopy Biomass Measurements of Individual Agricultural Fields with "HOTLIPS" System and LANDSAT MSS Imagery. Thomas D. Cheng and Lee D. Miller. International Conference on Computers in Agriculture Extension Program. Lake Buena Vista, FL. 12 pages. (HOTLIPS was a Z80 and CPM based microcomputer image processing system.)

Design of Spectral Measurement Equipment.

Dr. Lee D. Miller has provided the system design for the development of several field spectrometers and two and three band radiometers to measure biomass. Two of these field spectrometers covered the wavelength range of .40 to 2.60 µm and were built and subsequently commercially sold by Spectron Instruments, Denver, CO.
