TNT Products V6.9
December 2003
MicroImages, in its
18th year in business, is pleased to distribute RV6.9
of the TNT products.
This is the 54th release of TNTmips
and adds approximately 150 new features submitted by clients and MicroImages.
What follows is a brief summary of many of the significant new
capabilities in RV6.9.
-
Global Geodata: A DVD
providing global reference geodata is included with RV6.9
of your TNT product.
It provides several Project Files containing clean objects each of
which is world-wide in extent. These
include a high quality color image of the globe and a digital elevation model
of the continents both at 1 km resolution.
The vector objects include hydrography, boundary, transportation,
elevation, industrial, physiography, population, utility, vegetation, and data
quality features. These features
were prepared from 1:1,000,000 maps.
-
64-bit Products: 64-bit
versions of the TNT products are
now available for Mac OS X 10.3.2 on the Apple G5, for Sun SPARC Solaris 9.x, and for
SuSE Linux 9.x and a 64-bit Windows XP beta on the AMD Athlon and Opteron.
-
TNTsim3D: The free TNTsim3D
geosimulation program for Windows now has a new compact icon tool interface.
Styled surface feature points can now be selected, moved, and saved and
their attributes edited. DataTips
set up in other TNT products can
also be used in a similar manner. Actively
running simulations can now use SML
scripts for tools and features, such as route design and recording, and to
communicate interactively with other, non-TNT
programs.
-
TNTatlas: The free TNTatlas
publication tool can now be easily set up to use a custom, simplified
interface. Special purpose tools
such as data dependent queries created using SML
Tool Scripts or Macro Scripts can be managed and provided as part of the
layout(s) defining a specific atlas's geographic structure.
-
TNTserver: Web clients
can now request that rasters be returned as lossy or lossless JP2 files or
lossless PNG files with transparency in addition to lossy JPEG files.
Vector content including attributes can now be requested as an SVG
layout.
-
Tutorials: Four new tutorial booklets are available on the topics
of designing user interfaces in SML,
orthorectifying satellite images, geospatial science terms, and installing TNT
products. Twelve other tutorials
have been updated and expanded in scope to cover new features and ten more
have been updated.
-
TIFF Support: Auto-linking
for direct use can now be made to any supported TIFF file, not just grayscale
and RGB files. For example, links
can be made to TIFF files with a hierarchical structure of multiple images/rasters
and convenient autonaming is assigned. TIFF
export now permits multiple images/rasters of various data types to be put
into one TIFF file.
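As an aside for those who assemble such multi-image TIFF files outside the TNT products, here is a minimal sketch in Python using the Pillow library; the file names are placeholders and this only illustrates the multi-image TIFF idea, not the TNT export itself.

    # Write several rasters into one multi-page TIFF with Pillow.
    from PIL import Image

    band1 = Image.open("scene_band1.tif")    # assumed grayscale raster
    band2 = Image.open("scene_band2.tif")
    rgb = Image.open("scene_rgb.tif")        # assumed RGB composite

    # save_all/append_images place all three images in a single TIFF file
    band1.save("stack.tif", save_all=True, append_images=[band2, rgb])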
-
JPEG2000 Support: Lossy and lossless JPEG2000 compression can be used
in TNT raster objects and in other
TNT processes.
For example, raster extract accepts input objects in this compression
and will write out JPEG2000 compressed raster objects.
The import, export, creation, and internal compression of a single file
and raster object using JPEG2000 compression have been tested up to 275 GB.
Files can be exported in the GeoJP2 modified JP2 format.
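For readers who also handle JPEG2000 in their own scripts, a hedged sketch follows using the GDAL Python bindings rather than any TNT process; it assumes a GDAL build that includes the JP2OpenJPEG driver, and the file names are placeholders.

    # Convert a GeoTIFF to a losslessly compressed JPEG2000 file with GDAL.
    from osgeo import gdal

    src = gdal.Open("elevation.tif")
    gdal.Translate(
        "elevation.jp2",
        src,
        format="JP2OpenJPEG",
        creationOptions=["QUALITY=100", "REVERSIBLE=YES"],  # REVERSIBLE=YES -> lossless
    )

Dropping REVERSIBLE and lowering QUALITY would give a smaller, lossy file instead.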
-
2D Displays: Thin lines are now antialiased. LegendView
is more compact. SML
scripts can be attached to display layouts and thereby automatically added to
the view's tools and menus. Coordinate
readouts are presented simultaneously in two different coordinate systems.
A location can be zoomed to by entering its coordinates.
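The idea of presenting one location in two coordinate systems at once can be illustrated with a short Python sketch using the pyproj library; the coordinate systems and the sample location are arbitrary examples, not TNT code.

    # Report the same point in geographic WGS84 and in UTM zone 14N.
    from pyproj import Transformer

    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32614", always_xy=True)

    lon, lat = -96.7, 40.8                      # example location entered by a user
    easting, northing = to_utm.transform(lon, lat)
    print(f"Lat/Lon: {lat:.4f}, {lon:.4f}")
    print(f"UTM 14N: {easting:.1f} E, {northing:.1f} N")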
-
3D Displays: Two new faster and more accurate terrain rendering
methods are available: dense ray casting and variable triangulation.
-
Labeling Styles: The frames for labels can now have various shapes
and use single line or slender triangular leaders. The
boundary of the frame can be controlled in thickness and color.
The frame can be filled with transparent color.
Margins can be set for all four sides of the frame relative to the
text.
-
Shapefiles: Styles for shapefile points can now be imported and
exported.
-
Tabular View: Tabular views can be refreshed for tables linked
in other RDBMSs using ODBC or in Oracle using OO4O.
The refresh can be set to be manual, automatic, or controlled via SML
from another concurrent program in Visual Basic or some other language.
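As a rough illustration of the refresh idea (not the TNT interface itself), the sketch below re-reads a linked table over ODBC with the pyodbc module; the DSN, credentials, and table are hypothetical.

    import pyodbc

    conn = pyodbc.connect("DSN=FieldSamples;UID=reader;PWD=secret")

    def refresh_view():
        """Re-read the current records so a tabular display reflects edits
        made concurrently by another program (e.g. a Visual Basic form)."""
        cursor = conn.cursor()
        cursor.execute("SELECT site_id, sampled_on, value FROM samples")
        return cursor.fetchall()

    rows = refresh_view()   # could be called manually, on a timer, or from a script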
-
Satellite Image Orthorectification: Complete or partial QuickBird
and IKONOS satellite images ordered in their ortho-ready kit forms that
provide their rational polynomial coefficients can be easily converted to
orthorectified images. A DEM and
several accurate XYZ ground control points are the required inputs for this
procedure.
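For comparison only, the same kind of RPC-based orthorectification can be sketched with the GDAL Python bindings; the file names, output projection, and DEM are placeholders, and this is not the TNT procedure.

    # Orthorectify an ortho-ready scene using its RPC metadata and a DEM.
    from osgeo import gdal

    gdal.Warp(
        "ikonos_ortho.tif",
        "ikonos_ortho_ready.tif",            # scene delivered with RPCs
        dstSRS="EPSG:32614",                 # example output projection (UTM 14N)
        rpc=True,                            # use the rational polynomial model
        transformerOptions=["RPC_DEM=area_dem.tif"],
    )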
-
Georeferencing: The management, display, and analysis of the ground
control points being entered have been significantly improved.
Overall statistics provide indications of the overall accuracy of the
control point collection. The
satellite image rational polynomial coefficient model has been added to
facilitate the input and evaluation of the needed XYZ control points.
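The kind of overall statistic involved can be illustrated with a small NumPy sketch that fits an affine model to a handful of control points and reports per-point residuals and the overall RMS error; the coordinates are invented for the example.

    import numpy as np

    # image (column, line) and map (easting, northing) coordinates of 4 GCPs
    src = np.array([[120.0, 80.0], [540.0, 95.0], [510.0, 610.0], [90.0, 585.0]])
    dst = np.array([[652300.0, 4512750.0], [658900.0, 4512600.0],
                    [658500.0, 4504900.0], [651900.0, 4505200.0]])

    A = np.hstack([src, np.ones((len(src), 1))])       # [x, y, 1] design matrix
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # least-squares affine fit

    residuals = A @ coeffs - dst                       # per-point misfit (map units)
    rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
    print("residuals (m):", np.linalg.norm(residuals, axis=1))
    print("overall RMS error (m):", rms)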
-
SVG Support: Images
can be embedded in SVG layouts in JPEG, as well as the previously supported
PNG format. TNT
DataTips can be incorporated. TNT
layouts being converted to SVG can be clipped.
External stylesheets can be created.
-
Calibrated Color: ICM
and ICC color management can now be used to cross calibrate the color view on
the monitor with printers, scanners, digital cameras, and so on using the
standard (sRGB) color management built into the TNT
products long ago. The major
result is the optimal reproduction of the color on the monitor on any
available printer. Any
out-of-range colors on your monitor can be printed by choosing from the relative
colorimetric, perceptual, saturation, or absolute colorimetric rendering
intents. How much of the color
range in the current view can be rendered on the color printer can be quickly
examined with a soft proofing option, which limits the color monitor to the
calibrated characteristics of the color printer and flags the pixels that
will be out of range.
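Outside the TNT products, the same rendering-intent idea can be sketched with Pillow's ImageCms module; the printer profile file is a placeholder, and this stands in for, rather than reproduces, the TNT implementation.

    # Convert an sRGB view to a printer profile with a chosen rendering intent.
    from PIL import Image, ImageCms

    img = Image.open("map_view.png").convert("RGB")
    srgb = ImageCms.createProfile("sRGB")
    printer = ImageCms.getOpenProfile("printer_cmyk.icc")   # assumed ICC profile

    converted = ImageCms.profileToProfile(
        img, srgb, printer,
        renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
        outputMode="CMYK",
    )
    converted.save("map_view_for_printer.tif")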
-
Printing: Six-, seven-, and more-color printers are supported in addition
to the ICM and ICC color management conventions.
-
SML Documentation/Examples: All 976 functions have interactive
access to documentation and example uses.
The 557 class methods for the 324 classes have interactive
documentation. Two tutorial
booklets are now available with about twice as much reference material on how
to work with SML (TNT's
geospatial scripting language).
-
SML Debugging Tools: An
open script is automatically loaded into a second view for debugging.
This view provides icons to run, step through, pause, stop, show pseudo
code, and show timing. This view
can be used to insert breakpoints into the script and to review the time of
execution of the individual steps in the script.
-
SML and ActiveX: SML
can now accept callbacks from other non-TNT
programs and, in turn, communicate with some aspects of the current TNT
process. For example, a TNT
view can be redrawn to update its pinmap from a linked database.
Another application would be the use of Visual Basic to provide the
interface for the SML process.
Software
Longevity.
When
you spend a lot of money for a product you want to make sure it has longevity.
Companies spring up with new innovative ideas and then fade away or are
digested by the monopolies. Perhaps
this is why big software companies get bigger and bigger.
I know that MicroImages is no longer using many of the same software products
we used a few years back, except for the big name brands, and none of the same computer
brands except Apple. Fortunately,
with your support I have had the privilege and gratification of shepherding 54
releases of the TNT product out
the door and writing 54 of these MEMOs over the past 18 years.
However,
the true test of longevity of software is how long it remains viable to you by
adding new features and transcending hardware and operating systems.
MicroImages is pleased to have an active client who began using TNTmips
V1.0 and has just ordered his annual maintenance for TNTmips
RV7.0 and RV7.1. In this same
context the first state agency to order TNTmips
V1.0 now has 20 units and most are updated through RV7.1.
At the other end of the time scale, MicroImages has prepaid orders for
future releases out to RV8.0
(which means, to year 2010). Thirty
years is a lot of releases, loyalty, and trust from the users of any software
product. We appreciate your long
term support, confidence, and patience with our products.
Getting
the Word Out.
Advertising
our TNT products in the face of
a monopoly is a waste. As many of
you know, I am of the opinion that MicroImages' income is better spent to
advance the capabilities of the TNT
products for you, supply the best professional support possible, and, more
recently, to improve automated product testing.
Thus I greatly appreciate the excellent assistance our dealers and
clients give us in the promotion of our products.
Whether you help us promote our products one-on-one with a
friend or in bulk as described below, I appreciate it.
Japan does 6500
TNTlites.
The Japanese
language periodical covering GIS is called GIS-NEXT (see http://www.c-crews.co.jp/gnext_express).
GIS-NEXT has a circulation of 6,500 (90% subscription and 10%
newsstand). The target audience
is in survey, construction, and design (33%); education (22%); government and
related (21%); IT and system integrators (15%); and individuals and others (9%).
The April 2004
issue will contain an article to introduce TNTlite
prepared by OpenGIS, MicroImages' Reseller in Japan (see www.opengis.co.jp).
The target audience will be beginners in GIS.
This same issue will bundle a CD containing TNTlite
V6.8 for Windows and Macintosh completely translated into Japanese: from
installation, interface, help, comments, strings in database and so on through
to publishing results as layouts, TNTatlases,
and TNTsim3D.
Additional tutorial exercises will be included using sample data
prepared on Japanese topics and locations as well as an expanded collection of
SML scripts tailored to special
Japanese interests and requirements, such as importing from the special
formats of a variety of Japanese-only geodata sources.
After this initial distribution, subsequent issues over the next couple
of years will contain a series of articles by OpenGIS highlighting various
applications of TNTlite.
Turkey does 9000
TNTlites.
Two illustrated
and bound books have been written around the use of TNTlite
in Turkish and 3000 copies have been previously distributed by HAT,
Geographical Information Systems and Trade, Inc., MicroImages Reseller in
Turkey (see www.hatgis.com.tr). One
book is entitled COGRAFi BiLGi SiSTEMLERi translating as Geographic
Information Systems (260 pages) and more information and some sample pages
can be viewed at www.microimages.com/i18n/_tr_turkish%20COGRAFI.htm.
The companion book is UZAKTAN ALGILAMA translating as Remote
Sensing (176 pages) and more information and some sample pages can be
viewed at www.microimages.com/
i18n/_tr_turkish%20UZAKTAN.htm.
As a result of
this initial effort, these HAT materials have become an important educational
resource throughout Turkey where most students can not use the English
language technology available in this area of study.
Due to the success and interest in these materials, HAT is just
completing the revision and update of these books and has scheduled a second
printing of 6000 copies of each to be distributed along with TNTlite.
England
Finalizing 5000 TNTatlases.
The Petroleum
Exploration Society of Great Britain (PESGB), a non-profit organization in
Britain, has recently ordered
5,000 DVD copies of the TNTatlas
version of the Millennium Atlas of the Petroleum Geology of the North
Sea.
This is being prepared for them in Britain
by a consultant and will be distributed free to members in April.
For a synopsis of the original 400 page paper atlas published in 2002
please see www.npd.no/English/Emner/Ressursforvaltning/the_millennium_atlas.htm
and other Internet sources. Recently
the extensive maps in this atlas have been reduced to vector form and can be
acquired in various formats for a fee. Now
they will also be viewable, but only as an autorun TNTatlas
using the locked Project File option. This
locked option means that this geodata can be used in the atlas (viewed,
manipulated, queried, and so on) but the digital geodata can not be accessed by any
other TNT product for export,
modification, or further analysis except by the single, specific TNTmips
software authorization key that created the atlas. Some
additional details on this project can be found at www.exprodat.com/products/mill_atlas.htm.
More information will be provided here about this TNTatlas
after its official release.
Australia
Farming Practices Book to Include TNTlite.
A book on spatial
information management for farmers is being considered by CSIRO Landlinks
Press (www.landlinks.com). It
would include TNTlite on a CD and
be targeted toward technical training organizations especially those
conducting distributed education courses, national farm information groups,
rural merchandise stores, and related agricultural groups.
The CD would provide sample data sets for practice use in TNTlite
covering local applications of GIS, GPS, and remote sensing to Australian
farms. These sample datasets are
already prepared and if the book is approved for publication, it could appear
as early as August.
Malaysia GIS
Workbook using TNTlite.
A workbook
entitled GIS: Satu Pendekatan Praktical, translating roughly as
Practical Exercises in GIS, has been written in Malaysian by Dr. Mui-How Phua
of the faculty of the University
of Malaysia at Sabah
(Ph.D. in forestry, University
of Tokyo).
This workbook is currently in press.
Dr. Phua can be contacted at pmh@ums.edu.my.
Late?
Not really.
The official
release of RV6.9 of the TNT
products was made via microimages.com on 31
December 2003.
At that time any TNT
professional client who had ordered a license to use RV6.9
could download, install, and immediately begin to use it.
If you had not ordered RV6.9
before that release date but subsequently ordered it, you needed an activation
code from MicroImages to permit you to use it with your software authorization
key.
Since that date
MicroImages has duplicated the CDs containing the official RV6.9,
prepared this MEMO and other supporting materials, and is now supplying
herewith your complete release kit. These
activities could not be completed until RV6.9
was completed for official release and made available for downloading,
duplication, and these subsequent operations.
Responding to
error reports from those who have downloaded V6.9,
a complete patched version of PV6.9
has been substituted weekly for any new downloads of V6.9.
It would be meaningless to have you download the 31 December RV6.9
and then have to immediately apply the latest patch to it.
As a result, anyone who has downloaded any full or minimal V6.9
should not install the 31
December 2003 official release
RV6.9 on this CD.
Also please remember that the latest patch available from
microimages.com this week for your RV6.9
or PV6.9 TNT product is
comprehensive, inclusive, and can be applied to any earlier V6.9.
NOTE:
If you have downloaded and installed a V6.9 or a patch since 31
December 2003, do not replace
it with the older RV6.9 on the
enclosed CD.
The time between
the official release via microimages.com and shipment of this hardcopy
material has been somewhat longer than for previous releases.
However, many software manufacturers are gradually shifting to first
releasing their product electronically. Weeks
or months later it can be ordered or bought in the store on a CD with its
useless online help and without any printed materials.
Finally, at a much later date, you can select and buy a hardcover
reference book updated for that version.
For example, the most frustrating retrogressive computer experience I
have had recently in this cycle was needing some information on Microsoft Word
for Windows and having to go out and buy an expensive reference book for an
earlier version because a book for the current version was not available.
Unfortunately
you will need to get used to the idea of downloading new versions of most
software and also to applying periodic patches.
This poses a problem for those who have no Internet access or only modem
access. However,
the whole industry is moving forward and these changes are the practical reality
of working with the latest software products.
Those who have an older computer, use Windows 95 or 98, lack a $35 DVD
reader, have access to the Internet only via modem, or are working under similar
handicaps must reconcile themselves to getting things accomplished at a slower
pace on a computer where time must be exchanged for money.
How much of your time you wish to exchange for money by working with
less than optimal equipment and Internet access is something for you to
decide.
About
1 January 2004, MicroImages'
software engineers began working on the new features in the Development
Version of 7.0 (called DV7.0 until
its official release date). As a
result there are now several interesting new features already completed.
-
DataTips: Add text styling and frame fill colors to substantially improve their appearance.
-
GraphTips: Create graphical DataTips that will draw in (which means, pop in) graphical information.
-
Manifolds: Display 2D surfaces called manifolds in 3D views (for example, curved geologic profiles).
-
Faster 3D Views: Faster variable triangulation of terrain surfaces is being added to 3D views.
-
JPEG2000: The latest JPEG2000 library is being integrated.
-
OpenGIS Coordinates: ISO Standard 19111:2003 Reference Coordinate Specifications supported by the OpenGIS Consortium are being supported.
-
OpenGIS WMS: TNTserver is being expanded to an OpenGIS Consortium Cascading Web Map Server.
-
SVG layouts: Rendering to SVG adds interactive measurement tools and multiline, styled DataTips.
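Since the Web Map Server item above follows the published OpenGIS specification, a generic GetMap request can already be sketched; the server address below is a placeholder and the parameters are simply the standard WMS 1.1.1 ones, not a TNTserver-specific interface.

    # Build a standard OGC WMS 1.1.1 GetMap URL.
    from urllib.parse import urlencode

    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "world_image", "STYLES": "",
        "SRS": "EPSG:4326", "BBOX": "-180,-90,180,90",
        "WIDTH": 800, "HEIGHT": 400, "FORMAT": "image/png",
    }
    print("http://example.com/wms?" + urlencode(params))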
If
you wish to experiment with and use these in your advanced projects, you can
download DV7.0 now. Since
these are prototype features, you may end up helping MicroImages debug them,
but you also have the opportunity to contribute suggestions for possible
improvements in them while they are receiving active attention and become part
of the official release of RV7.0.
It is often easier for software engineers to respond to your
suggestions when they are actively working on a feature than when
they have been given a new priority for the next release.
As these and
other new features are added to DV7.0,
new color plates will be posted on microimages.com.
Until such plates appear, it is unlikely that any other additional
written information on them will be available other than the introductory
notes in this memo. Of course, as
always, our software engineers are willing to help you if you have a specific
question.
Testing
and Retesting.
It is only human
nature to criticize software faults and question why its manufacturer did not
test it more thoroughly. On the
other hand, it is naive to conclude that commercial software manufacturers
from Microsoft on down ignore this aspect of software development.
Those of us who pay for the development of commercial software products
learn pretty fast that solving an error after a product is delivered costs 100
times more than finding and solving it on the originating programmer's desk.
It is logical to
wonder at this point why MicroImages software development engineers, most of
whom have at least 15 years of experience writing the TNT
code, simply can not do so without errors.
That, of course, is not a realistic expectation as this is not a
project of a single careful individual. Complex
software is developed by many engineers on a day-by-day basis and evolves into
place as a very
large, interrelated entity. Even
perfect daily contributions from each team member can negatively impact
today's work of someone else in the adjacent office, or code written years
ago, all without any immediate manifestations.
Let's put the
overall situation in context. Many
of you who communicated with MicroImages last year requested 1 new feature or
improvement to be added to the TNT
products. MicroImages encourages
you to do this as these are treated as serious inputs to focus upon in the
development of our products. In
many cases the requested feature is of high priority to you and your project.
A few of you then separately indicated that you wish MicroImages would
not worry so much about adding new features and concentrate on making the
products error proof. So what we
have here is a conundrum. We
receive hundreds of requests for new, useful, and interesting features, many
of which are individually highly important to a specific client.
Paralleling this is the separate recommendation that MicroImages stop
adding new features and extensively test each product by following out its
thousands of procedural threads. Obviously,
all these requirements can not be met. Extensive
time spent testing a complex, large software product can make it more
reliable, but as soon as it is altered just to correct an error or add a
feature to meet your individual requests, which are often excellent, all of
those testing results are suspect and should be repeated.
So what to do?
You can not halt forward progress and stay in business!
Humans can not and will not do the repetitious testing required over
and over! Software implementation
causes errors, many of which ripple out into unexpected areas!
To address these, the strategy gradually instituted at MicroImages has
been to implement as many daily automatic testing procedures as possible.
Why do it daily? Because
all the MicroImages products must be rebuilt overnight on every supported
platform so that each day's changes by all contributors are integrated
together. This is then the basis
for providing you with weekly integrated and comprehensive patches for the
current version should you need them.
These are
examples of the groups of nightly tests being performed with direct error
reports sent to the responsible software engineer's desk.
-
A large
collection of complex map layouts is built and compared with the correct,
stored output raster (a minimal sketch of this kind of comparison follows this list). A
single complex layout can use many different object types and procedures.
Layouts using non-English features are now being collected and
added to this test set to help track non-English text errors.
-
The library of
all SML scripts is tested to see if they parse.
-
The SML
scripts which can be automatically run without human input are run and
checked nightly.
-
A collection
of complex vector objects are validated.
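A minimal sketch of the layout comparison mentioned in the first item above might look like the following (Python with NumPy and Pillow); the file names and the pass/fail reporting are assumptions, not the actual MicroImages test harness.

    import numpy as np
    from PIL import Image

    def compare_to_reference(test_png, reference_png):
        """Return True if tonight's rendering matches the stored reference exactly."""
        test = np.asarray(Image.open(test_png))
        ref = np.asarray(Image.open(reference_png))
        if test.shape != ref.shape:
            print(f"FAIL {test_png}: size changed {ref.shape} -> {test.shape}")
            return False
        bad = np.count_nonzero(test != ref)
        if bad:
            print(f"FAIL {test_png}: {bad} pixels differ from the reference")
            return False
        return True

    compare_to_reference("layout_042_tonight.png", "layout_042_reference.png")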
Some TNT
processes, such as the Spatial Data Editor, are highly interactive with their
operator. Furthermore, they
provide many different interactive tools, which can be applied in a myriad of
different orders. Since the
Editor is difficult to automatically test, the first step was to provide it
with good backup procedures in V6.7
and V6.8.
Now automated tests of various vector operations are being devised to
monitor these activities embedded in the Editor and in related TNT
processes. Fortunately, other
"batch"-oriented testing, such as map layouts and vector validation, also
checks these important operations in these products.
At the moment the
nightly testing is being expanded to handle 3 versions at once: the previous
release (latest PV6.8), the
current release (latest PV6.9),
and DV7.0.
These have to be tested on various platforms (Windows, Mac, 32-bit,
64-bit, and so on). Recently most
staff computers have been equipped with Purify, an expensive IBM product that
must be purchased individually for each machine, to build debugging versions of
all these TNT processes.
This enables processes with built-in debugging to be used the next day
by the development and support engineers to help track down our problems.
Unfortunately, processes built with Purify take 10 to 20 times longer
to build and then equally long to run, so all processes have to be built with
and without Purify.
All of this,
together with the nightly builds of all these products, requires extensive
overnight computer processing resources.
As a result, considerable recent effort has been invested in writing a
network management system to use many of the MicroImages computers for an
overnight distributed processing system for these builds and tests.
Now all our staff desktops and many other computers are left on at
night for this purpose. This
central management service sends out the next test to each available machine
that reports in as ready and suitable for a task.
The individual machine then finds the appropriate test software, TNT
process(es), and test data somewhere on the network, runs the task, and
reports back the results to the proper destination such as the software
engineer responsible for that program. This
in turn puts a lot of demand for bandwidth on the intranet, which has to be
upgraded to handle these requirements.
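In greatly simplified form, the dispatch pattern described above looks like the sketch below (Python, with threads standing in for the separate desktop machines; the task names and reporting are placeholders).

    import queue
    import threading

    tasks = queue.Queue()
    for name in ["layout_001", "layout_002", "sml_parse_all", "vector_validate"]:
        tasks.put(name)

    results, lock = [], threading.Lock()

    def worker(machine):
        while True:
            try:
                task = tasks.get_nowait()       # take the next available test
            except queue.Empty:
                return
            outcome = f"{task} ran on {machine}: OK"   # a real worker runs the TNT process here
            with lock:
                results.append(outcome)                # a real worker reports to the engineer
            tasks.task_done()

    threads = [threading.Thread(target=worker, args=(f"desktop{i}",)) for i in range(3)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("\n".join(results))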
These activities
are reviewed here to assure you that preventing errors and getting you
timely patches are important to MicroImages. What
is necessary is to develop a scheme that can monitor software health and
quality while new features are being added.
Please also understand that this is accomplished at considerable
expense in terms of human capital, delays, and fewer new features.
Please also understand that the priorities assigned to the new features you
request are directly under my control. As
a result, I make daily decisions about which new features will do the
most for the most existing users and enable our products to remain competitive,
while also allocating the same human capital to error management and
approaches to reduce errors.
Getting
Support.
At this time all
of MicroImages' software support engineers hold formal university
degrees in computer
science. Thus you are
communicating directly, free of charge, with a computer professional who knows
about software, programming, networks, systems, and the operation of the TNT
products, but is not a geospatial analyst or professional in some application
area.
Gradually, over
the years, the technical range and complexity of geospatial analysis and,
thus, the TNT professional
products has increased. As a
result, your questions have been requiring more and more technical expertise
to answer. A good example is the
continually increasing proportion of your questions related to your use of SML,
TNTserver setup, TNTsdk,
interfaces to Visual Basic, and so on. These
are questions best addressed by computer science professionals, and you are
being provided with that kind of support. Correspondingly,
more geospatial technicians are being graduated or self-taught and employed as
this professional activity expands into the mainstream.
This also increases the complexity of operationally oriented TNT
questions and correspondingly decreases the number of your application
questions.
Please also
remember that, by policy, MicroImages does not do applications in competition
with you; we do the software you use. Thus,
MicroImages is generally staffed by computer specialists who are not trained
as you are in the application of geospatial analysis.
They are more than happy to receive your application questions on how
to go about applying a TNT product
to a given task or series of tasks from a technical viewpoint. However,
they can not readily answer your questions about project design in your or
your client's professional area of expertise (for example, city and
transportation planning, archaeology, mineral exploration, meteorology,
coastal studies, and so on).
We occasionally
get inquiries from clients along the lines of "how do I apply my TNT
product to this proposed river management project," or to "a watershed
management project," or "to compete for this city infrastructure
management GIS system," and so on. MicroImages
support engineers can not help you at all in these areas and refer such inquiries to
others here. Obviously those of
us who are experienced in geospatial project design are going to need quite a
bit of information and adequate time to discuss and to answer these kinds of
questions.
Another kind of
request is to "provide information on who else is using a TNT
product" in a specific application area.
A good example of this was a request today from a potential client in
the U.S. Navy asking whom he could talk to about using TNTmips
for extracting a DEM from underwater stereo photographs.
Fortunately, we did know of a U.S. client in Florida who has been
experimenting with this for years. While
these requests are reasonable, the principal problem in answering them is that
MicroImages does not know what 90% of our clients are specifically doing with
our products. As a result, this
kind of question usually can not be satisfactorily answered.
Our clients are scattered around the world in a hundred nations.
Many are uncomfortable communicating with us in English, or cannot, and
work directly only with our Resellers. About
all we know is the general area of their organization's activity (for
example, agriculture, forestry, map making, and so on) from their registration
form.
When you contact
MicroImages' software support engineers for help, you will always get
answers, even though initially they may not be what you want, as the engineer may not yet be
able to understand your inquiry, reproduce your problem, or make decisions
about what you want added to the TNT
products. Many problems require a
back-and-forth dialog before they can be understood and the proper solution
proposed.
Fortunately,
MicroImages does have some very experienced Resellers, Geospatial Consultants,
and professional clients who understand geospatial applications in various
professional fields and how to use the TNT
products in them. They can help
you with project design and training in your own language and local setting.
However, please expect to pay them as appropriate for their efforts.
Searching
for Information.
One simple
MicroImages resource that I feel could help many of you is the use of Google to
perform searches restricted to microimages.com for a topic of current interest,
to research various TNT features, or
to simply figure out how to do something. All
you have to do is "let your fingers do the talking and the walking" by
pushing the SEARCH button at www.microimages.com/search/.
This brings up the familiar Google Advanced Search form, which is
preset to restrict your search to only the material Google has cataloged for
microimages.com.
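The same restriction can be applied by hand with Google's standard site: operator; as a small illustration, the following builds such a query from a script (the search terms are just an example).

    from urllib.parse import urlencode

    terms = "DataTip styles"
    query = {"q": f"site:microimages.com {terms}"}
    print("https://www.google.com/search?" + urlencode(query))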
MicroImages has
carefully ensured that the text content of every one of the 76 Tutorials and
Reference Booklets, the text in every one of the approximately 500+ color
plates, the reference manual, and the content of every one of these 54
MEMOs is on microimages.com in a form that is completely indexed by Google. Linked
to, or embedded in, the PDFs of these text materials are all the associated color
illustrations. This same procedure
is also followed for every new color plate added during the development of
the next version (for example, DV7.0),
and any new or updated tutorials are also immediately posted, sometimes even in
incomplete form.
All these materials
provide you with at least 10,000 pages of text and many thousands of associated
illustrations at your fingertips. The
only caution is that some of these materials are older, but even then are useful
to clarify a point as long as the assumption is not made that the TNT
products still implement the concept or application, especially the user
interface, in exactly that form.
Typically Google
crawls and reindexes well over 1 gigabyte of text material on microimages.com at
least once a month. Therefore, on
the average your searches are up-to-date to within 2 weeks of the latest written
materials released via microimages.com. For
example, it is likely that the contents of this MEMO will already be indexed by
Google for searching by the time you read this, so try a search on some unique
phrase herein.
I use this Google
site search daily as I think of this collection of materials as my large filing
system of all the permanent TNT
product formal reference materials. During
the writing of this MEMO, I consulted it very frequently to check a color plate
or tutorial in order to review, and sometimes understand, what had been accomplished in RV6.9.
Often 2 well chosen words, when restricted to microimages.com in this
site search, provide me a reasonably short list of references on the topic of
interest. I can then locate the
specific item and use the Google link to review it.
I often have a good
idea of just which 2 or 3 words to try since I am quite familiar with the
reasonably standardized terminology used in these written materials.
This should not discourage you, as it only means I get to what I want
sooner. With a little practice and
thought, you will also get familiar with how things are expressed and can be
located. All these documents are written as technical reference materials.
They usually employ carefully selected terminology and repeat the same
term over and over to avoid possible confusion in your understanding of what is
meant. And, of course, you can
always ask the software support engineers for the appropriate search terms,
which will often lead you to more information on your topic than they could
directly provide you, especially the illustrated materials.
A good source of
the terminology used in these written materials is the new reference booklet
released with RV6.9 entitled Glossary
for Geospatial Science. It will
be of special assistance to help non-native speakers of English select search
terms and phrases. Over the next
couple of months MicroImages will investigate how we can assist you further in
choosing what you enter into this Google site search to retrieve the references
on your topic of interest.
For example,
perhaps this glossary with additions can be added to this search area on
microimages.com to assist you in a more direct fashion in entering your search
terms. Often you can also remember
something about how a feature or concept was previously presented in our written
material, such as in a color plate. It
may be possible to further restrict your site search to specific groups of
materials, such as only color plates, only tutorials, only these release MEMOs,
and so on. It may not be possible
to confine your search to a date range since periodically these materials are
rearranged or revised and, thus, Google assigns a new and current date to them.
Watch this search area of microimages.com for these developments and come
back with any ideas you have, keeping in mind that we can not readily change or
humanly cross-reference over 10,000 pages.
nVidia
Fights Back. by Dave Salvator.
PC Magazine. June
30, 2003.
page 38.
This review
compares the performance of the most recent nVidia display board (GeForce FX
5900 Ultra) with the most recent ATI board (Radeon 9800 Pro).
Fresh
Ideas in Databases. by Bill
Machrone. PC Magazine.
February 12, 2002.
page 57.
While this
editorial is a bit dated, it provides some useful insight into how and why
database software is evolving.
Tough
Choices: Ruggedized Notebooks. by
Jim Engelhardt. Geospatial
Solutions. May 2003.
pages 42-43.
"It's
a rough world out there. To protect
your hardware and data investments, durable notebook computers are essential for
withstanding the harsh environments encountered during mobile mapping and GIS
data collection."
Low Cost GPS.
MicroImages
periodically receives questions about which GPS system to use.
RV6.9 provides a simple
procedure for orthoimage production from IKONOS and QuickBird images supplied
with Rational Polynomial Coefficients (RPCs).
As a result you may become interested in determining which GPS device and
approach to use for collecting Ground Control Points (GCPs) of suitable accuracy
for this satellite image rectification. MicroImages,
as a software company, has little field experience with these devices, and the
correct solution and the accuracy that can be achieved in your area vary
widely around the world. These
questions are best answered by those who have experience in collecting field
data in your area of interest.
If you are
interested in low-cost GPS units that are convenient to use with TNTatlas
on Tablet PCs or similar applications please review the following sites and
devices. Whether or not they
produce adequate accuracy for your application is a matter for you to determine.
If good supporting material is available (which means, high resolution
airphotos for marking GPS point locations in the field) and a large number of
GCPs are collected with access to the Wide Area Augmentation System (WAAS)
correction signals, these or similar GPS devices could be used to collect the
control points for this TNT
rectification procedure. On the
other hand, fewer GCPs are required if survey accuracy GCPs are collected and
can be very accurately located in the satellite image (for example, use high
resolution airphotos for locating points in the field).
DeLorme.
The
Earthmate GPS Receiver from DeLorme at www.delorme.com is a $130 unit the size
of a matchbox and has a USB cable interface or can be adapted for wireless using
Bluetooth. With either interface,
it can be mounted on a vehicle roof or atop a staff to get it above your body
and nearby obstructions, thus "seeing" more satellites.
It can also provide improved accuracy by acquiring the WAAS satellite's
GPS position correction signals for the United States, including
Alaska, most of Mexico, and
southern Canada (www.nstb.tc.faa.gov/vpl.html).
The European equivalent of WAAS, called European Geostationary Navigation
Overlay Service (EGNOS), is scheduled to become available this year.
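For those who want to read such a receiver from their own scripts, here is a hedged sketch using pyserial and the standard NMEA GGA sentence; the port name and baud rate are assumptions about a typical setup, not instructions for any particular unit, and a valid position fix is assumed.

    import serial

    def parse_gga(sentence):
        """Return (lat, lon, fix_quality) from a $GPGGA sentence.
        A quality value of 2 indicates a differential (e.g. WAAS/EGNOS) fix."""
        f = sentence.split(",")
        lat = int(f[2][:2]) + float(f[2][2:]) / 60.0
        lon = int(f[4][:3]) + float(f[4][3:]) / 60.0
        if f[3] == "S": lat = -lat
        if f[5] == "W": lon = -lon
        return lat, lon, int(f[6])

    with serial.Serial("COM3", 4800, timeout=2) as port:   # typical NMEA settings
        line = port.readline().decode("ascii", errors="ignore")
        if line.startswith("$GPGGA"):
            print(parse_gga(line))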
PHAROS Science
and Applications, Inc.
The
PHAROS USB Connected GPS Receiver and the GPS Receiver with Bluetooth Wireless
Technology at www.pharosgps.com are quite similar devices in size and
functionality to the DeLorme products outlined above.
MrSID -
LizardTech.
LizardTech,
Inc. To be Acquired by Celartem Technology USA, Inc.
Directions Magazine. Press Release.
20 June 2003.
For details see
www.directionsmag.com/press.releases/index.php?duty=Show&id=7328.
Some
highlights:
"LizardTech,
Inc. employs 29 people and is being acquired for US$11.25 million in cash."
[At its peak LizardTech employed 150 staff, and various sources indicate
that LizardTech has had US$45 to $50 million in outside capital over the past 11
years; for example, see
seattlepi.nwsource.com/business/liza20.shtml]
"Celartem
Technology USA, Inc. is wholly owned by Japan's Celartem Technology, Inc.
which provides digital image and secure content distribution solutions.
The addition of LizardTech's core technology and applications further
Celartem's vision to consolidate and develop technology to simplify and
enhance the creation, management, control and distribution of digital
content."
BREAKING
NEWS: Mapping Science, Inc.
no longer exists.
GeoJP2 - Mapping
Science.
Effective
12 February the URL link to www.mappingscience.com points to a web page being
maintained by LizardTech as follows.
"By
now you've probably heard that LizardTech has obtained the assets of Mapping
Science, Inc. This action is a
result of a settlement in LizardTech's ongoing lawsuit for claims against
Mapping Science for misappropriation of trade secrets.
Mapping Science has ceased operations and LizardTech will support Mapping
Science customers going forward."
The
page continues on with additional information related to this topic.
Additional
rumors on this topic can be found at www.lbszone.com/features/lizardtech_msi/,
a portion of which is as follows.
"The
'skinny' on the suit. LizardTech entered into litigation as a result of six
former LizardTech employees who founded Mapping Science. The findings of the
settlement established that there existed a misappropriation of trade secrets
and breach of NDA's the employees had with LizardTech. Mapping Science will
cease to exist as a company and all of its assets will be turned over to
LizardTech. During the transition it is expected that some MS employees will be
retained as consultants by LizardTech in order to facilitate a
"smooth" transition and to continue supporting customers that may be
affected. Documentation (such as the GeoJP2™ file specification) and other
material that may have been previously available for download/access via Mapping
Science's website may not be available in the short-term, however, information
can be obtained by contacting GeoJP2fileformat@lizardtech.com. Developers and
software vendors will be pleased to know that LizardTech is currently working on
a software development kit that will enable support for MrSID® and JPEG 2000,
within a single SDK... stay tuned!"
LizardTech's
official statement on this topic can be read at
www.lizardtech.com/solutions/ms/msfaq.php.
Introduction.
Your RV6.9
kit contains a DVD providing Global Reference Data in RVC format and illustrated
in synoptic form on the attached color plate entitled Global Data Sets.
These data sets were derived from the Visible Earth, VMap0, and GTOPO30
data, each of which is described in more detail below. This new geodata set is
provided as a bonus feature for those who have purchased RV6.9
or later of their TNT product.
As a result the Project Files provided on this DVD will not work
in any earlier version of a TNT
product.
The global geodata
provided on this DVD originated in the public domain as reviewed below.
Since these digital geodata sets were originally created and distributed
in the public domain, MicroImages hereby declares that they remain in the public
domain including this DVD and products derived from the geodata it contains.
You are free to export and use the geodata on this DVD in any way you
desire without restrictions.
Disclaimer:
Since MicroImages did not create this data, it has no
responsibility for the accuracy of any of the geodata on this DVD.
Each
of the three data sets included on the DVD (World 1 km Color Image, World
1:1,000,000 Map Features, and World 1 km Elevation) has been converted to a
single global layer georeferenced to latitude/longitude, centered on the
Greenwich meridian. During
this conversion, each original data set was subjected to extensive TNT
processing into the appropriate TNT
object types.
Image and
Elevation. The image and
elevation data were assembled and mosaicked to a common cell size, pyramided,
and JPEG2000 compressed into 2 matching raster objects.
Map
Features. The VMAP0 feature
data started out as a ridiculous 129,588 files organized into 23,696 folders.
It was reorganized into 10 TNT
Project Files each containing several global vector objects of a common theme
(for example, various hydrographic layers all in 1 Project File, boundaries in
another Project File, transportation, elevation, industry, physiography,
population, utilities, vegetation, and data quality in others).
During this conversion extensive improvement and reorganization of these
originally unmanageable data structures were performed using various TNT
processes. These included:
-
filtering out
extraneous elements (for example, removing grid lines from feature layers),
-
complete
reorganization of the attributes,
-
improved
styling,
-
merging all
spatial coverage tiles of the globe by theme, and
-
optimizing each vector object for faster access.
The effects of some
of these merging and cleaning operations are illustrated in the attached color
plate entitled Making a Global Data Set from VMap0. As a result of these
operations, the vector objects provided in this set have the same spatial detail
as their original sources, and they are vastly improved in ease of use over
their original form.
Data
Sets.
This global geodata will provide you with an excellent starting point for
any continent, nation, or province scale project.
It provides a single MODIS color image of the entire earth, which can be
viewed at any scale from a full view to a 1:1 view in 1 or 2 seconds. The
GTOPO30 elevation data for the globe?s land area has been provided as a single
raster object. The 1:1,000,000 map
features and attributes provided originally as the Digital Chart of the World,
and later as a VMAP0 product, have now been reduced to 10 Project Files, each with
a few simple, consistent theme layers, each stored as a single global vector object.
If you are zoomed in somewhat these optimized vector layers can be
overlaid on the MODIS image in a few seconds (even the most spatially complex
theme layers such as hydrology and water bodies). This
is vastly more convenient than dealing with the approximately 24,000 folders and
130,000 files used for the original form of this data.
Each of the raster
and vector objects on this DVD can be viewed individually using TNTatlas
or any TNT professional product.
They can also be viewed in combination, extracted from, and further
edited and analyzed using the appropriate TNT
product. As noted above, with its
pyramided structure, the World 1 km Color Image displays in seconds at full
view. However, some of the World
1:1,000,000 Map Features vector objects (for example, global hydrology) have hundreds
of thousands of elements and may require 1 to 2 minutes for display at
full view. This, however, is an
incorrect use of any of these layers, as it will fill the continents with a
multitude of points, lines, and/or polygons, which produces a meaningless display
because no individual elements can be distinguished.
For faster display times and individually discernible elements, it is
recommended that you first display the World Color Image, zoom in on your area
of interest, and then add to the view the World Map Features layers of interest
to you. Under these circumstances TNT's
optimized vector structure will provide fast overlay of any of the vector layers
in just seconds. When you use the
global layout provided on the DVD (WorldAtlas in the WorldAtlas folder), these
map feature layers are automatically added at appropriate scales.
Scale control has also been included for the labels in these map feature
layers and is illustrated in the attached color plate entitled Global Data
Set with Scale Controlled Labels.
MicroImages hopes
you will find a use for this geodata and use it to build and demonstrate
projects that benefit from these superior features of the geodata management in
the TNT products.
Geodata
Descriptions.
World Image (Blue
Marble: Land Surface, Shallow Water, and Shaded Topography).
Source.
The global image
was originally prepared from MODIS images at the NASA Goddard Space Flight Center
by Reto Stöckli (land surface,
shallow water, clouds). Enhancements by Robert Simmon (ocean color, compositing,
3D globes, animation). Data and technical support: MODIS Land Group; MODIS
Science Data Support Team; MODIS Atmosphere Group; MODIS Ocean Group.
Additional data: USGS EROS Data Center (topography);
USGS Terrestrial Remote Sensing Flagstaff Field Center (Antarctica);
Defense Meteorological Satellite
Program (city lights). See the
metadata included with the Visible Earth image in RVC format for additional
credits and web addresses.
Preparation.
These NASA MODIS
color images of the globe were the starting point for preparation of the World 1
km Color Image, which has been converted from its original TIFF format by import
to TNTmips, mosaicked, georeferenced,
and losslessly compressed via JPEG2000 into a single raster object in a Project
File. The resulting raster object
has 21,600 lines and 43,200 columns with a ground cell size of approximately one
kilometer. The uncompressed RVC
file is 2.78 GB, and the RVC file using embedded JPEG2000 compression is 484 MB.
World Map Features
(VMap0: 1:1,000,000 Maps).
Source.
Vector Map Level 0
(VMap0) is an updated and improved
version of the Digital Chart of the World (DCW®) prepared and
released by the National Geospatial-Intelligence Agency (NGA, formerly the National
Imagery and Mapping Agency, or NIMA). The
VMap Level 0 database provides worldwide coverage of vector-based geospatial
data. It consists of geographic,
attribute, and textual data. The
primary source for the database is the 1:1,000,000 scale Operational Navigation
Chart (ONC) series co-produced by the military mapping authorities of
Australia, Canada, the United Kingdom, and the
United States.
The original VMap0 data is organized into four libraries of many, many
files covering 10 themes. These
libraries are Europe and Northern Asia (EurNAs), North America (NoAmer),
South America and Africa (SAmAfr), and Southern Asia and Australia (SAsAus).
Preparation.
The four VMap0
continental libraries and their theme structure are inconvenient for use in the TNT
products. MicroImages has merged
each of the themes into a single, global vector object that includes all
available data for the world at the time the data was obtained.
Note that the VMap0 data available to MicroImages did not contain all
themes or layers for all areas (which means, all libraries). For
example, the Vegetation theme covers North America
only, and the Miscellaneous Hydrography
Lines layer was not present for North America.
If coverage of part of the world is missing from the VMap0 source data,
it is noted in the list of themes and objects below.
After merging, the
vector objects were further modified as indicated in the list below.
The step referred to as "database simplified" includes the removal of
ID fields in the imported table and deletion of duplicate records, which have no
use in TNT products.
"Filtered" indicates removal of excess nodes; dissolving adjacent,
identically attributed polygons into one; or both.
The effects of these and other cleanup operations are illustrated in the
attached color plate entitled Making a Global Data Set from VMap0.
As a result of these modifications and to avoid confusion with the
original VMap0 data, the vector objects provided are referred to as World
1:1,000,000 Map Features.
The accuracy of the
standard attribute tables provided for all planar and polygonal topology objects
is limited by the Latitude/Longitude georeference.
Because all layers for each theme were imported together, polygonal
topology was initially assigned to all. Many
of the layers, however, had line attributes only, and these were then converted
to planar topology, which saves about 15% in object size and displays
significantly faster. Objects
containing points only have no corresponding benefits from changing topology
type, so their polygonal topology was not changed.
(For information on the properties of, and differences between, the
various topology types, such as polygonal
and planar, see the color plates
included on the DVD entitled VTopoTypes and TopoBehavior.)
VMap0 in its original VPF format is 1,805,094,587 bytes with 129,588
files in 23,696 folders. VMap0 in
MicroImages? RVC Project File format is 1,862,676,480 bytes with 10 files and
does not use any folders.
Detailed
Inventory.
Following is a list
of themes and layers in the World 1:1,000,000 Map Features provided at a total
of 2.36 GB (larger than listed above because an additional layer that merges
World_inwatera and World_watercrsl layers from the Hydrography theme has been
added):
bnd:
boundaries (116 MB)
World_coastl:
Coastlines, planar, database
simplified (36,351→11
records), DataTip is exdesc (Definite, Indefin)
World_depthl:
Depth Contours, planar, DataTip is
crv (numeric values), database simplified (48997→7
records), filtered
World_polbnda:
Administrative Areas, polygonal,
DataTip shows state/country name, filtered, database simplified
World_polbndl:
Political Boundary Lines, planar,
DataTip shows UseDesc (International, Primary/1st Order), may want
F_CodeDesc (Administrative Boundary, Claim Line, Armistice Line), filtered and
edited, database simplified
World_oceansea:
Oceans/Seas, polygonal, DataTip
shows ocean/sea name, filtered and edited, database simplified
World_polbndp:
Political Boundary Point Features, polygonal,
DataTip shows country code, database not simplified
World_barrierl:
Barrier line features, EurNAs, SAsAus only, planar,
rest left as is (157 lines and records, would be just 1 record if simplified),
DataTip is line length (all lines are walls)
World_bndtxt:
Boundary Coverage Text, polygonal,
DataTip is nam
dq:
Data Quality (5.85 MB)
World_dqarea:
Data Quality Areas, polygonal,
DataTip is comp_date, database simplified (4850→314
records)
World_dqline:
Data Quality Lines, planar, DataTip
is length
World_dqtxt:
Data Quality Text, polygonal,
DataTip is textstring but many won't make sense because the complete DataTip
is split over multiple records
elev:
Elevation (695 MB)
countourl:
Contour Lines, planar, filtered,
database simplified (1,099,573→419
records), DataTip shows ZV2
elevp:
Spot Elevations, polygonal, DataTip
shows ZV2
hydro:
Hydrography (573 MB)
World_watrcrsl:
Water Courses, planar, filtered,
database simplified (959,816→4
records), DataTip is perennial/intermittent, lines appear highly broken because
polygonal water features are not part of this layer but are found in inwatera
World_inwatera:
Inland Water Areas, polygonal,
database simplified (339965→5
records [1 is blank]), DataTip is perennial/intermittent
World_aquecanl:
Aqueducts/Canals/Flumes/Penstocks, planar,
database simplified (3059→7),
DataTip shows above/below ground surface
World_dangerp:
Danger Points, polygonal, DataTip is
rock or wreck
World_hydrotxt:
Hydrography Coverage Text, polygonal,
DataTip is textstring
World_miscl:
EurNAs, SAmAfr, SAsAus only, planar,
database simplified (190→3
records), filtered, DataTip is F_codedesc (seawall, dam/weir, breakwater/groyne)
World_miscp:
Miscellaneous Hydrography Points, polygonal,
DataTip is f_codedesc (island, waterfall, dam/weir…), labels on at 1:400,000
World_hydrology:
Water Courses and Inland Water Areas, polygonal,
merged World_watercrsl and World_inwatera, then further modified to transfer
polygon attributes to lines so that lines only could be selected for display to
increase speed, DataTip is perennial/intermittent, perennial/permanent on at
7,000,000, non-perennial/intermittent on at 2,500,000.
Because all elements have map scale controlled display, it will likely
show no elements at full view even when added to a group by itself.
You must be zoomed in farther
than 1:7,000,000 for any elements to show.
Other themes have their map scale control designated in the layout so it
does not apply when viewed separately.
ind:
Industry (3.16 MB)
World_storagep:
Storage Point Features, polygonal,
DataTip is Tank/Depot/Water Tower
World_misindp:
Miscellaneous Industry Point Features, polygonal,
DataTip is f_codedesc [Oil/Gas Facility, Processing/Treatment Plant, Tower
(Non-communication)]
World_indtxt:
Industry Coverage Text, polygonal,
DataTip is textstring
World_fishinda:
Fish Hatcheries/Farms, EurNAs, SAsAus only, polygonal,
DataTip is f_codedesc
World_extractp:
Extraction Point Features, polygonal,
DataTip is Mine/Quarry, Well, Salt Evaporator, Oil/Gas Field
World_extracta:
Extraction Areas, polygonal,
database simplified (1171→4
records) filtered
phys:
Physiography (62.1 MB)
World_grounda:
Ground Surface Areas, polygonal,
DataTip is smcdesc (Sand, Distorted Surface, Lava)
Landforml: SAsAus only, planar,
DataTip is f_codedesc (Ice Cliff, Bluff/Cliff/Escarpment)
World_landicea:
Land Ice Area, polygonal, DataTip is
f_codedesc (Snow Field/Ice Field), edited (grid lines removed)
World_phystxt:
Physiography Coverage Text, polygonal,
DataTip is textstring
World_seaicea:
Sea Ice Areas, grid lines removed, polygonal,
DataTip is f_codedesc (Polar Ice, Pack Ice)
pop:
Population (41.3 MB)
World_poptxt:
Population Coverage Text, polygonal,
DataTip is textstring (small villages, numerous graves, numerous buildings…),
labels on at 5,000,000
World_mispopp:
Miscellaneous Population Points, polygonal,
database simplified (201,520→
64,765 records), DataTip is txt (names or descriptions of buildings and
settlements)
World_mispopa:
Miscellaneous Population Areas, polygonal,
DataTip is f_codedesc (Native Settlement or Military Base), labels on at
3,000,000
World_builtupp:
Built-Up Area Points, polygonal,
DataTip shows name (unk for many), use transfer attributes for NoAmer (from
builtupa) and merge again, database union, database simplified (148→126
records), labels on at 5,000,000
World_builtupa:
Built-Up Areas, polygonal, DataTip
shows name field
trans:
Transportation (203 MB)
World_aerofacp:
Airfield Facilities Points, polygonal,
DataTip shows facility name
World_dqline:
Line Data Quality, planar, DataTip
is f_codedesc (bridge, power, telephone, railroad…)
World_mistranl:
Miscellaneous Transportation Lines, planar,
DataTip is f_codedesc (Aerial Cableway Line/Ski Lift Line, Pier/Wharf/Quay)
World_railrdl:
Railroads, planar, filtered,
database simplified (159,980 → 13 records),
DataTip shows single/multiple tracks
World_roadl:
Roads, planar, filtered, database
simplified (571,334 → 42 records),
DataTip shows primary/secondary route (names not present)
World_rryardp:
Railroad Yard Points, polygonal,
DataTip is f_codedesc (Railroad Yard/Marshalling Yard), labels on at 3,000,000.
World_traill:
Trails and Tracks, planar, filtered,
DataTip shows length, database simplified (53,339 → 4)
World_transtrc:
Transportation Structure Points, polygonal,
DataTip is f_codedesc (Bridge/Overpass/Viaduct, Causeway, Ferry Crossing, Ford,
Tunnel)
World_transtrl:
Transportation Structure Lines (tunnels, bridges…), planar,
filtered, database simplified (4,496 → 10)
World_transtxt:
Transportation Coverage Text, polygonal,
DataTip is textstring
utils:
Utilities (32.8 MB)
World_dqline:
Data Quality Line Features, planar,
DataTip tells type of utility
World_pipel:
Pipelines, planar, filtered, DataTip
tells whether above or below ground, database simplified (7,488 → 2 records)
World_utill:
Power Transmission/Telephone/Telegraph, planar,
DataTip is length (km), database simplified (106,130 → 3,
1 blank)
World_utilp:
Utility Point Features, polygonal,
DataTip is f_codedesc (Communication Building or Tower, Power Plant,
Pumping Station, Substation/Transformer)
World_utiltxt:
Utility Coverage Text, polygonal,
DataTip is txtstring
veg:
Vegetation NoAmer only (41.8 MB)
cropa:
Crop Areas, Canada only, polygonal,
DataTip is f_codedesc (Cropland), grid line removed
grassa:
Grass Areas, polygonal, DataTip is
f_codedesc (Grassland, Scrub/Brush)
tundra:
Tundra, polygonal, DataTip is
f_codedesc (tundra)
treesa:
Trees, polygonal, DataTip is vegdesc
(Deciduous, Evergreen, Mixed, Other)
swampa:
Swamp/Marsh, polygonal, DataTip is
f_codedesc (Marsh/Swamp)
World Elevation (GTOPO30).
Source.
GTOPO30 is a global
Digital Elevation Model (DEM) resulting from a collaborative effort over a three
year period led by the US Geological Survey's EROS
Data Center
in Sioux Falls,
SD. Source
data or funding was contributed by NASA, the United Nations Environment
Programme/Global Resource Information Database (UNEP/GRID), the US Agency for
International Development (USAID), the Instituto Nacional de Estadistica
Geografica e Informatica (INEGI) of Mexico, the Geographical Survey Institute (GSI)
of Japan, Manaaki Whenua Landcare Research of
New Zealand, and the Scientific Committee on
Antarctic Research (SCAR). GTOPO30
was derived from a variety of raster and vector sources (see the illustration at
http://edcdaac.usgs.gov/gtopo30/source_img.html).
The original data set was completed in late 1996.
This global topographic data set has a horizontal grid spacing of 30 arc
seconds (approximately 1 km).
Preparation.
The World 1 km
Elevation provided on this DVD was imported from EROS' GTOPO30 DEM format into
MicroImages' RVC Project File format as 33 16-bit signed integer rasters that
were mosaicked into a single raster object.
A color map was subsequently selected and lossless JPEG2000 compression
applied. The original mosaic in RVC
format was 1.85 GB. The RVC file
with embedded JPEG2000 compression and a null area mask is 160 MB.
Note that JPEG2000 has no concept of a null value and will slightly alter
these uniform null values in large null areas (i.e., the water areas in this
dataset). Thus a null mask is
necessary and is automatically used for display when pyramid tiers are used
because null values are not precisely conserved in sampling.
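The role of the null mask can be sketched as follows (a minimal Python/numpy illustration, not MicroImages' implementation; the null value and the mask layout are assumptions):

    import numpy as np

    # Sketch only: restore null cells after JPEG2000 decompression using a
    # separately stored boolean mask, since the lossy wavelet round-trip does
    # not preserve a uniform "null" value exactly.

    NULL_VALUE = -9999  # hypothetical null elevation code

    def apply_null_mask(decompressed: np.ndarray, null_mask: np.ndarray) -> np.ndarray:
        """Force every cell flagged as null back to the exact null value."""
        restored = decompressed.copy()
        restored[null_mask] = NULL_VALUE
        return restored

    # Tiny demonstration: ocean cells drift slightly after the lossy round-trip.
    dem_after_jp2 = np.array([[12.0, -9998.7], [-9999.2, 305.0]])
    mask = np.array([[False, True], [True, False]])
    print(apply_null_mask(dem_after_jp2, mask))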
Using in TNTatlas.
This DVD provides a
layout that will open to show the World 1 km Color Image with World 1:1,000,000
Map Features Political Boundaries overlaid.
This layout is found in the WorldAtlas.rvc Project File within the
WorldAtlas folder. It can be opened
(File/Open Object) and viewed in the free TNTatlas
product available from www.microimages.com. The
layout is much more complex than the simple, initial view.
It groups objects by theme and uses map scale controlled visibility and
hidden layers to make the layout manageable and responsive.
Using in 3D
Displays.
These
global geodata sets can be used directly without modifications for a 3D view of
any area of the globe as illustrated in the color plate entitled Viewing
Global Data Sets in 3D. The
world image can be draped over the world elevation layer and map features and
symbols from the vector objects added as desired.
RV6.9 introduces a new 3D
option for near/far clipping of the vector objects overlaid into a 3D view.
Without far ground vector clipping, vector features can pile up and
obscure the horizon in long global 3D views as illustrated in the attached color
plate entitled Clipping Near/Far in 3D Views.
By setting the far clipping value in the 3D Viewpoint dialog you can set
a far distance (i.e., the scale) at which vector features will no longer
be rendered near the horizon of the view.
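The effect of the far clipping value can be sketched as a simple distance test (hypothetical Python, not the actual TNT 3D rendering pipeline):

    import math

    # Sketch: skip vector features whose ground distance from the viewpoint
    # exceeds the far-clip value set in the 3D Viewpoint dialog, so symbols do
    # not pile up along the horizon in long global views.

    def clip_far_features(features, viewpoint_xy, far_clip_distance):
        """Return only the features close enough to the viewpoint to render."""
        vx, vy = viewpoint_xy
        return [(x, y) for x, y in features
                if math.hypot(x - vx, y - vy) <= far_clip_distance]

    points = [(0, 0), (50_000, 0), (400_000, 120_000)]
    print(clip_far_features(points, (0, 0), far_clip_distance=100_000))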
Rendering
a large area in 3D from the global elevation layer takes an excessively long
time. This is because the 3D
viewing process forms triangulation for the entire elevation layer before
rendering regardless of the area covered by the required view.
This handicap of some 3D views of this global data will be rectified in
the patches to RV6.9.
Other Geodata Sets.
MicroImages
has acquired a higher resolution coastal boundaries data set prepared by NGA
(National Geospatial-Intelligence Agency).
The coastal feature layer on the enclosed DVD was extracted from
1:1,000,000 maps. This new global
feature layer called WVFPLUS was prepared from 1:250,000 scale topographic maps
in VPF format (382 MB). MicroImages
is currently initiating a similar cleaning effort to bring this new feature
layer into a Project File for future distribution.
Please
identify other global data sets in the public domain for possible preparation
and distribution with V7.0.
Floating
licenses to the TNT products can now
be administered on (i.e., set up as the license manager) or float to any
supported Mac, Windows, Linux, or UNIX platform on your network.
In
an enterprise situation your TNT
activity may be such that you do not conduct it on a personal computer, but
share a workstation with others. This
workstation uses a fixed, attached TNT
software authorization key or uses the network to check out a temporary licensed
seat from a floating TNT license.
In either situation each user of the common workstation now has their own
TNT user profile and default
settings.
MicroImages
clients using RV6.9 of a TNT
product with Windows 95 and Windows 98 for the first time will be provided with
a URL to a short form at microimages.com to use to obtain a startup code.
By this route MicroImages can determine how many are still using these
older operating systems. From this
information it can be determined how much longer these versions of Windows will
be supported by the TNT products.
The eventual deprecation of these versions of the TNT
products will permit MicroImages to use network, file access and recovery,
security, and other features not supported by these older operating systems.
Mac
OS X 10.3 (Panther).
The TNT
products continue to operate as 32-bit applications under Mac OS X 10.2.x.
The
TNT products now operate as either
32-bit or 64-bit applications under Mac OS X 10.3.2 or later.
If you are using an earlier version of Panther (i.e., 10.3 or
10.3.1), please install your free upgrade to V10.3.2 before using your TNT
product.
Mac
OS X.
A Mac OS X platform
can now be used as a TNT/FLEXlm license server for a floating license to the TNT
products. With this addition, any TNT supported platform can serve as the
license manager or use a TNT product as a client (Mac, Windows, UNIX, or Linux).
It also permits a floating license to be used on an all-Mac OS X network,
as requested by various MicroImages' clients.
Background.
Previous
Incarnation.
Some
of you have used and others may recall that the TNT
products were distributed for several years for the DEC Alpha 64-bit processor
using OSF/1 (a variant of UNIX). These
were full implementations of the TNT
products and were fast relative to using 32-bit based computers available at
that time. This processor went through several ownerships (to Compaq and then
to HP) and incarnations, eventually evolving into a server-only product line
and then quietly fading into obscurity under HP. While the Alpha-based desktop
computer was a very powerful platform for the TNT products, the processor was
"ahead of its time." The general public had not yet discovered Linux, digital
cameras, video, music, and the other popular applications of today, and Intel's
gradual, incremental improvements in the clock rate of its 32-bit processors
satisfied their requirements at that time. Thus, desktop platforms built around
the 64-bit Alpha processor suffered from a lack of wide-scale software and
peripheral hardware support and slowly died away with DEC.
Behind
the scenes, MicroImages continued to build, maintain, and distribute the TNT
products for the Alpha-based platform until just recently when it was no longer
of interest to any client. Thus, it
proved to be no special difficulty to provide transparent TNT
support of the new 64-bit processors that are now making their entry into
desktop platforms and the marketplace. These
new 64-bit processors and the operating systems supported by RV6.9
of the TNT products are illustrated
on the enclosed color plate entitled TNTmips Does 64-bits (as well as TNTedit,
TNTview, and TNTatlas, …).
Memory
Requirements.
TNT
software processes are written entirely by MicroImages and are very conservative
in the use of real and virtual memory. The
total real/virtual memory limitations discussed here for desktop 64-bit
platforms are only of general interest relative to their use for TNT
projects. However, memory
limitations may impact your use of these new 64-bit computers for other
software. Many products must load
all the required data into real memory (for example, Photoshop) or real plus
virtual memory. Using only real memory guarantees maximum performance but may
limit the size and scope of the non-TNT aspects of your project application.
Thus, the notes that follow on the amount of real or real/virtual memory
supported by your potential 64-bit platform are of considerable interest.
No
currently available 32-bit version of Microsoft Windows can address a memory
space larger than 2 GB. This limit
is the sum of your real plus virtual memory!
Virtual memory only simulates the availability of additional real memory
up to this same limit of 2 GB. It
also permits each concurrent process in a multitasking system to seem to have
available up to 2 GB of real plus virtual memory.
However, allowing a process to "go virtual" can have a massive negative impact
on performance (up to 100 to 1) when writing and reading all or part of each
concurrent process's memory image to and from a virtual copy on a hard drive
(the so-called "rollout and rollin" activities).
For
the moment 2 GB of real, fast memory is expensive and is more or less a
practical economic limit for most desktop systems running TNTmips.
Adding 8 GB of real memory to an Apple G5 would cost US$5000 and nearly
triple the platform's price. However, when libraries created by others are
incorporated into the TNT products,
we can not control their demands for memory when huge geodata sets are
processed. For example, the Kakadu
library used for JPEG2000 compression uses memory to accumulate and hold the
characteristics of the image being compressed.
Thus the 2 GB real memory limit on Windows was recently a severe
performance handicap when MicroImages compressed a single 280-GB image into a
single, linked JP2 file and also into a comparable JPEG2000 compressed raster
object in a Project File. However,
by design, the amount of real memory available has little impact on the
decompression and use of this JP2 image or JPEG2000 compressed raster object or
MrSID and ECW compressed images. This activity is discussed in more detail in
the TNTmips / JPEG2000 section below entitled Improved Memory Management.
You have not encountered this memory limitation when compressing into
MrSID (*.sid) or ECW (*.ecw) in the TNT
products as the libraries that they provide for use without charge to other
developers, such as MicroImages, limit the size of the image that can be
compressed. MicroImages has no information about how the commercial, for-sale
implementations of the MrSID and ECW compressor products deal with the
potential memory requirements of huge input images.
Apple's Mac OS X for the G5.
The RV6.9
CD for Mac OS X contains both the 32-bit and 64-bit versions of the TNT
products. It took about 1 day to make the minor modifications to TNTmips
needed to bring up the 64-bit version on the G5 once its build environment was
established. If you are now using
an Apple G5 based platform (single or dual processors), you can install separate
32-bit and 64-bit versions of the TNT
products and run either one with your HASP USB software authorization key.
MicroImages has both single and dual G5 based platforms running as part
of its software testing and verification activities.
You will see a
marked difference in speed on the G5 between the 32-bit and 64-bit TNT
processes that are computationally intense.
However, as most now know, the latest version of the Mac OS X (Panther
version 10.3.2) does not provide a full 64-bit implementation. It
still shares the 32-bit versions of most of its lower level libraries.
Thus, you will not yet get the full capabilities of a 64-bit version of
your software. The most significant
limitation is that the currently available Apple platforms permit the
installation of 8 GB of real memory, but due to these 32-bit libraries each TNT
and other software process is limited to using 2 GB of real memory.
The Apple G5
platforms are extremely well designed and constructed and incorporate many other
incremental hardware advances beneficial to the operation of heavy duty software
such as TNTmips, such as Firewire
800, 1 G-bit Ethernet, fast hard drive and ATA serial drive support, lower cost
for dual processors, faster memory, and so on. All
these factors contribute to making this an excellent 32-bit or 64-bit platform
for TNTmips.
Sun's Solaris 9.x for the SPARC.
The
TNT products are now available for
this full 64-bit implementation of Solaris 9.x.
Both the 32-bit and 64-bit versions for this platform are provided on the
RV6.9 UNIX CD (you can use the
32-bit version for Solaris 8.x or 9.x.). This 64-bit implementation will permit
the TNT products to address and use
a much larger memory space than the 2 GB limits of every 32-bit version of
Windows and Mac OS X.
AMD's Athlon and Opteron using SuSE Linux 9.x.
MicroImages
has built and will provide 64-bit versions of the TNT
products for 64-bit SuSE Linux. Only
1 version of the TNT products is
needed for platforms using a single F64 or FX64 or dual Opteron processors.
AMD has stated, and MicroImages has verified in its AMD FX and Opteron
test machines with these processors, that a properly built 64-bit operating
system and application software will run with all of these 64-bit AMD
processors. Furthermore, the
generic 32-bit Linux version of the TNTmips
products is also available for these platforms.
Red Hat 64-bit Linux should become available soon for platforms using AMD
64-bit processors. MicroImages' 64-bit Linux support is generic and, thus,
will be usable immediately or with minor
patches for this platform.
SuSE
Linux on these platforms can use much more real memory than the 2 GB limit.
However, please carefully check the actual motherboard you are interested
in to see how much real memory can be added, what type of memory is used, how
much it costs, how many slots are currently occupied, and so on.
Hardware vendors are not going to the added expense of supporting
additional control logic and memory slots for real memory beyond what they
expect their buyer to use in the machine they are currently selling (i.e.,
they are not considering your future requirements).
A typical dual AMD Opteron server-type implementation used at MicroImages
has 8 memory slots and can address 16 GB of real memory, whereas the low-cost
AMD F64 desktop models being used can only accommodate 2 GB in 2 slots.
AMD's Athlon and Opteron using Microsoft's Beta 64-bit Windows XP.
MicroImages
has also built the TNT products for use with the beta version of 64-bit
Windows XP, to be officially released late this year. A 64-bit version of the TNT
products is available for this platform. So
far MicroImages has not experienced any particular problems with either this
beta version of Windows XP or the TNT
products running under it. Platforms using these AMD 64-bit processors can also
run 32-bit Windows application software without alterations.
MicroImages has also verified that the RV6.9
of the TNT products for Windows will
completely function with the Windows XP, 2000, and 2003 supplied with these
platforms. A Windows 64-bit 2003
beta version is available for this platform but has not yet been tested with the
64-bit version of the TNT products.
Until
recently MicroImages has been using the same AMD-based test platforms for the
Windows 64-bit XP beta and SuSE Linux versions of the TNT
products. While inconvenient,
MicroImages has verified that you can set up additional hard drives to boot into
either 64-bit Linux or Windows on these platforms.
However, when using this beta version of XP you will encounter the same
hardware vendor imposed real memory limits as discussed above.
Itanium
II using Linux or Windows XP.
TNT
products are not available for this platform.
Intel has targeted the Itanium II chip at the multiprocessor server
market. As a result even the
cheapest implementation of a desktop computer using a single Itanium II chip is
US$5000, and Microsoft's price for this version of Windows is another US$3000.
For this reason at this time MicroImages does not consider that there is
a viable market for the TNT products
on Itanium II based platforms, especially in light of the fact that a good AMD
Athlon F64 desktop machine can be purchased for US$1000 and used with the 64-bit
version of SuSE (US$300)! Recently
Intel has "discovered" that AMD and Apple are aggressively pushing into the
64-bit desktop market. As a result,
there are now rumors of the imminent announcement of a 64-bit Intel chip
oriented in price toward your desktop that will run the same 64-bit Windows XP
and all your 32-bit version applications.
There
were 5215 successful and complete downloads of TNTlite
from microimages.com in 2003, which was up 42% over 2002.
DV7.0,
the current development version of the TNT
products has been converted to use the C++ compiler in Visual Studio .NET 2003,
Professional Version. Those using DV7.0
of the TNTsdk will need to convert
to this programming environment.
A
better public domain documentation program called Doxygen (www.doxygen.org) is
now being provided for your access to the TNTsdk
online documentation for PV6.9
maintained at www.microimages.com/products/tntsdk.htm.
Introduction.
TNTsim3D
continues to expand to provide you with a better and even more unique FREE
product to publish, distribute, and permit free use of your geospatial products.
While it lacks some of the cosmetic features of other simulation tools,
it now provides even more features tailored to the presentation of your project
results. The new interactive
interface makes TNTsim3D more
intuitive to operate. However, the
most unique and powerful new feature is the ability of TNTsim3D
to execute the various kinds of scripts created in SML.
Depending upon your design, these scripts provide interactive tools
during the simulation or will even operate the simulation as a movie. For
example, you can add unique new interactive tool(s) using SML
Tool Scripts, log and play back a flight path as illustrated in a new sample
script, or even use a script to analyze the geodata while it is being used by
the simulation. An even more advanced capability now in development is the use
of an SML script to allow TNTsim3D
to communicate with and interact with a program you create in Visual Basic or
C++.
TNTsim3D
Interactive Tools.
The tools used to
interact with the views in TNTsim3D
have been reorganized and restructured and are now selected from icon buttons on
a toolbar in the Main View. These
include the Viewpoint Tool, Point-Of-Interest Tool, TNTatlas Tool, and the Point
Overlay Tool. The first three of
these were available from the menus in V6.8.
The Point Overlay Tool is a new addition that allows you to select, move,
and edit point features, show DataTips for points, and to retrieve and edit
single record views for records associated with the points.
The attached color plate entitled TNTsim3D Interactive Tools
illustrates the user interface and use of the 4 tools.
Viewpoint Tool.
The Viewpoint tool
functions as in earlier versions of TNTsim3D.
With this tool turned on, left-click in any view with the mouse to
recenter all views to that position. The
input controls for moving through the simulation (joystick, keyboard, and mouse)
are only active when this tool is turned on.
Point-Of-Interest
(POI) Tool.
The basic
functionality of this tool is the same as in earlier versions of TNTsim3D,
but it has a new interface and several new features.
As in previous versions, when you add a Point-Of-Interest, a marker
symbol appears in each view window and a Point-Of-Interest view can be opened
and remains centered on that location as you move through the simulation.
To create a Point-Of-Interest, you now press the Create POI button in the
Points-Of-Interest dialog window (instead of left-clicking with the mouse in a
view, as in V6.8).
Then use the mouse in any view to drag the POI marker to the desired
location. When you add a POI, a
record is added to the POI table, which is shown in the Points-Of-Interest
window. Use this window to edit
each point's characteristics. You
can change the point name (used as the title for its window) and edit its
coordinate position for precise placement within the landscape.
You can also pick the marker used to identify each POI (from an Arrow,
Arrow-Head, Cursor, Bar, or Flag shape) and select the color of each individual
marker. POI symbols can now also be
toggled on and off in all views. You
can also choose whether or not to open the POI view automatically for subsequent
new POIs.
TNTatlas Tool.
This tool is
approximately the same as in earlier versions of TNTsim3D.
The TNTatlas tool's dialog shows you a list of the associated and
available atlases or permits you to navigate to locate and add an atlas to this
list. After an atlas has been
selected, left-click with the mouse in any view to open the TNTatlas
program, which shows its composite view of that point in a new window.
TNTatlas can then be used to
gain access to its extensive collection of information about that point
including a linked database, to make measurements of features, and use its many
interactive GIS functions.
Point Overlay Tool.
This tool presents
a variety of new capabilities for use with any point overlay layers that you
have selected for viewing from the Layer menu.
The tool and its Point Overlay dialog window allow you to select, move,
edit, insert, delete, and save point features including Billboards & Stalks
and the positions of Volumes-Of-Interest. The
tool can also open the standard TNT
Single Record dialog for the point to show/edit its detailed attributes.
Some of these new capabilities are illustrated in the color plate
entitled Edit Point Positions in TNTsim3D.
Selecting
Points.
To pick a point
from any of the selected overlays, left click the mouse directly on its symbol
in any of the various views. This
will highlight the point with a flashing color in all views in which it is
visible to show the point is selected and ready for moving, editing, and so on.
You can define the highlight color in this dialog to help you determine
which point has been selected against your specific landscape background.
Left-click away from any overlay point to deselect the current point.
Editing Points.
After you have
selected a point, turn on the Edit Point check box in the Point Overlay tool to
enable any of the following editing operations for that point.
Moving Points.
With the Edit check box turned on, you can move the point by dragging its
symbol with the mouse. You can also
reposition the point by editing its coordinates in the Point Overlay tool's
dialog.
Copying Points.
Press the Copy Point button to create a copy of the selected point nearby
on the terrain. The copy is automatically selected and highlighted and its
original version deselected. You
can then use the mouse to drag this new duplicate point around on the terrain.
It will be an exact duplicate of the original point in appearance and a
duplicate record of it is added to the corresponding tables at this new
coordinate position.
Deleting Points.
Press the Delete Point button to delete the selected point from the
overlay.
Editing Point
Names. The name of the point in
the Point Overlay tool's dialog can be selected and edited.
If you have created a DataTip in another TNT
product for this point (see section below), it will be the name of the point and
can be edited here.
Saving
Overlays.
None of the
original Point Overlay layers you create in the Landscape Builder can be
permanently deleted or altered in this tool. In other words, the original
overlays are protected layers that will always appear on the Layer menu for that
landscape. However, if you have
altered any point or points in one of these original layers, you can save a new
layer with all of these alterations.
Save Overlay As.
If you press the Save Overlay As button, you can save the layer
containing the selected point with a new name, creating an entirely new layer
that includes all the changes you have made to any point in that layer.
The new overlay will then appear on the layer selection list for this
landscape. You can create as many
new Point Overlay layers as you wish.
Save Overlay.
You may subsequently alter a point layer that you saved within TNTsim3D
(rather than an original overlay created in the Landscape Builder).
Pressing the Save Overlay button deletes that layer and replaces it with
a new layer of the same name containing the latest alterations. If
the point layer being edited was created in the Landscape Builder, pressing the
Save Overlay button does not replace the existing overlay, but prompts you to
name a new layer (as for the Save Overlay As... button described above).
Delete Overlay.
Pressing the Delete Overlay button deletes the entire point layer you
have selected if it was previously created in TNTsim3D.
It is not available if the layer you have selected was created in the
Landscape Builder. This Delete
Overlay button is not used to undo individual alterations to selected points.
Show Detailed
Information.
The Landscape
Builder transfers some attributes and derives others to build point overlay
layers into the Landscape File. Within
TNTsim3D you can gain access to all
these attributes using the Show Info check box in the Point Overlay tool's
dialog. It opens the standard
Single Record dialog used in all TNT
products for the point selected. If
a new point is selected, this dialog will show the new point's attributes. Since
this is the standard Single Record dialog, it provides the same tools found in
all other TNT products for editing
and altering the associated records in the table.
2D versus 3D
Points.
In the Landscape
Builder you can create point overlay layers from vector objects with either 2D
or 3D point coordinates. The points
in a 2D overlay are automatically locked to the terrain surface, whereas 3D
overlay points are simply inserted into the volume encompassing the landscape at
their XYZ positions. If you move
these 3D points around with the new Point Overlay tool, their Z-value remains
fixed and you might find that your point is now floating above the terrain or
buried beneath it. When using
vertical exaggeration in TNTsim3D
you have the option to choose if the Z-value of 3D points is held constant or
rescaled along with the terrain layer.
Regardless of the
vertical exaggeration, 2D points stick to the terrain when the Point Overlay
tool is used to move or add points. It
is, thus, possible to position a point behind a terrain feature where it is not
visible in the Main View from the current viewpoint.
Moving points around in the optional Map View is a good way to ensure
that all possible surface positions can be directly used without having to
switch back to the Viewpoint Tool to adjust your view of the landscape.
TNTsim3D
DataTips.
The DataTips you
have defined for a layer in TNTmips
are processed in the Landscape Builder and passed into the Landscape File as a
text string of up to 256 characters. As
in the other TNT products, TNTsim3D
exposes a DataTip when your cursor is near the associated point.
Thus, while you use your joystick to move the view around in the
simulation, you can use the mouse cursor to reveal descriptive information about
a particular point. This is
illustrated in the attached color plate entitled DataTips for Point Overlays
in TNTsim3D.
If the DataTip
defined in TNTmips contains virtual
fields (computed fields) the Landscape Builder evaluates these fields and
converts them into their equivalent character string for use in the Landscape
File. You can use virtual fields to
combine multiple fields and text into an attractive DataTip for display in TNTsim3D,
including multiline DataTips.
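The flattening of such a DataTip into a fixed-length text string might look something like the following sketch (hypothetical Python and field names; only the 256-character limit comes from the text above):

    # Sketch only: evaluate a "virtual field" style DataTip into a plain text
    # string of at most 256 characters, the form in which the Landscape
    # Builder passes DataTips into the Landscape File.

    MAX_DATATIP_LEN = 256

    def build_datatip(record: dict) -> str:
        # A computed, multiline DataTip combining several attribute fields.
        text = "{name}\nElevation: {elev} m\nType: {ftype}".format(**record)
        return text[:MAX_DATATIP_LEN]

    record = {"name": "Water Tower 12", "elev": 431, "ftype": "Storage Point"}
    print(build_datatip(record))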
Within TNTsim3D
you can not permanently alter the DataTips text string for the associated
original billboard overlay set up by the Landscape Builder.
However, you can edit this DataTip text string or name associated with a
symbol and save these changes as a new overlay layer.
Simply select a symbol with the Point Overlay tool, edit the name field,
and save the new overlay. You can
then select either the original or altered layer to provide the DataTips for
your simulation.
Using
SML Scripts.
Introduction.
MicroImages'
software tools enable the FREE distribution of your geospatial project results
in hardcopy or in digital form via TNTatlas,
TNTsim3D, and TNTserver.
These products are part of a 3-tier approach designed to let you serve
different levels of end user involvement. MicroImages
creates the professional and the free publishing software.
Most of you use this software to conduct complex projects.
However, often you are not the end user of the results of your projects.
As a geospatial analyst, your end users are usually your consulting
clients or others in your company. The
professional TNT products that you
purchase provide the geodata preparation and analysis tools, and the FREE
products provide a means by which you can distribute your results in an
organized, more easily understood form, with some simple end-user oriented
tools.
Using SML
within TNTsim3D provides you with
one more unique means of satisfying your client's interests.
Using SML scripts, you can
add custom effects and movement capabilities to your TNTsim3D
simulations. A script can merely
vary the environmental parameters of the scene or actually move the viewer in
predetermined ways through the landscape. The
simulation you build in the Landscape Builder can be distributed with your
custom scripts along with the free TNTsim3D
program. While your client can use your SML scripts, they cannot easily
create or edit scripts in TNTsim3D; this
is the area of your expertise working together with MicroImages and the
professional TNT products.
As before, you must
create your SML script in your
professional TNTmips, TNTedit,
or TNTview where the tools are
available to access the latest functions and classes, view the documentation and
samples, edit, check syntax, and so on. However,
when a script is completed it can now be saved as an object within the Landscape
File containing the simulation's geodata.
When this Landscape File is loaded in TNTsim3D,
these SML scripts appear by name and
can be run from the new Script menu. This
procedure is discussed in more detail and is illustrated by example scripts in
the attached color plate entitled Customizing TNTsim3D with SML.
These sample scripts can be downloaded from www.microimages.com/downloads/TNTsim3Dscripts.htm.
Day-Night Cycle SML
Sample Script.
The simple sample SML
script (daynight.sml) is illustrated and provided with comments on the attached
color plate entitled Customizing TNTsim3D with SML.
While you are moving about in the main view, running this macro script
repeats a day-night or diurnal lighting cycle.
It cycles the sky color from blue to black and cycles the terrain from 0
to 100% black fog to simulate day/night lighting.
The sun angle is also varied during the daylight portion of the diurnal
cycle. Please note from the back of
this plate that all of this can be accomplished for every frame in the
simulation using an SML script of
about 25 lines.
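For readers who do not yet use SML, the per-frame logic described above can be sketched in language-neutral form roughly as follows (Python pseudocode with hypothetical names; the actual sample is the SML script daynight.sml):

    import math

    # Sketch of the per-frame diurnal logic described for daynight.sml:
    # cycle the sky from blue to black, the fog from 0 to 100%, and vary the
    # sun angle during the daylight half of the cycle.

    def diurnal_state(frame: int, frames_per_cycle: int = 600):
        """Return (sky_rgb, fog_fraction, sun_elevation_deg) for this frame."""
        phase = (frame % frames_per_cycle) / frames_per_cycle   # 0..1 through one day
        daylight = max(0.0, math.sin(2 * math.pi * phase))      # 0 at night, 1 at noon
        sky_rgb = (0, 0, int(255 * daylight))                   # blue by day, black by night
        fog_fraction = 1.0 - daylight                           # 0% at noon, 100% at midnight
        sun_elevation = 90.0 * daylight                         # vary sun angle in daylight only
        return sky_rgb, fog_fraction, sun_elevation

    for frame in (0, 150, 300, 450):
        print(frame, diurnal_state(frame))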
Orbit Sample
Script.
The simple sample SML
script (simorbit.sml) is illustrated and provided with comments on the attached
color plate entitled Customizing TNTsim3D with SML.
When this script is selected from the menu, it suspends motion in the
simulation and orbits the main view around its current center point.
At any point the operator can break out of this orbit and continue motion
controlled by the input device by toggling off the script on the Script menu.
Again please note that this Macro Script is about 20 lines long.
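The motion simorbit.sml is described as producing can be sketched roughly as follows (again Python pseudocode with hypothetical names, not the SML sample itself): each frame the viewpoint is placed on a circle around the current view center.

    import math

    # Sketch of an orbit around the view center, one position per frame.

    def orbit_position(center_xy, radius, frame, degrees_per_frame=1.0):
        angle = math.radians(frame * degrees_per_frame)
        cx, cy = center_xy
        return (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

    for frame in range(0, 360, 90):
        print(frame, orbit_position((1000.0, 2000.0), radius=500.0, frame=frame))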
Using this script as a starting point, you could easily modify it to move
the simulation through other kinds of preprogrammed motions such as panning the
view outward from the current nadir position, zooming in and out, bookmarking
views, jumping from view to view, and so on.
All these lead to an even more complex script that records a flight path
and plays it back.
Flight Recorder
Sample Script.
You have previously
requested that a flight recorder be added to TNTsim3D.
This has now been implemented using an SML
script that is reviewed in the attached color plate entitled Create Flight
Paths in TNTsim3D via SML. This flight recorder is controlled by a simple
Recorder dialog box set up in SML.
This dialog provides simple buttons to Record, Play, Pause, and Stop. The
plate illustrates and annotates important sections of this script.
You can use this
completely open script as a starting point to develop unique flight controls.
For example, the parameters recorded for each view are clearly
identified. Thus, you can obtain
flight coordinates from some other external device (truck, plane, …) or
mathematical or empirical expression, process them into this format, and play
them back in a simulation probably compressing time.
Now you can use the SML skills you have honed in preparing "2D" SML
scripts in real time in TNTsim3D.
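The record and playback logic of such a flight recorder can be sketched roughly as follows (hypothetical Python, not the SML sample or the TNTsim3D API; the recorded parameters shown are only illustrative):

    # Sketch of the Record/Play idea behind the flight-recorder sample.

    class FlightRecorder:
        def __init__(self):
            self.path = []          # one entry of viewpoint parameters per frame
            self.recording = False

        def record_frame(self, x, y, z, heading, pitch):
            if self.recording:
                self.path.append((x, y, z, heading, pitch))

        def play(self, step=1):
            """Yield stored viewpoints; step > 1 compresses time on playback."""
            for frame in self.path[::step]:
                yield frame

    rec = FlightRecorder()
    rec.recording = True
    for i in range(5):
        rec.record_frame(x=i * 10.0, y=0.0, z=500.0, heading=90.0, pitch=-5.0)
    rec.recording = False
    print(list(rec.play(step=2)))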
Miscellaneous.
Depending upon your
requirements TNTsim3D can render
symbols at 3 different quality levels. The
options panel now provides Good, Better, and Best for symbol rendering.
A comparison of the results for the Good and Best setting is provided in
the attached color plate entitled Symbol Rendering Quality in TNTsim3D.
Both are acceptable, but for faster rendering and thus higher frame
rates, the Good setting may be more than satisfactory.
A toggle is now
provided on the MapView to show/hide Point-Of-Interest locations.
Holes
in polygons (for example, courtyards in buildings) are now represented in their
extruded shapes.
DV7.0: Action Points.
Current
developments in using SML in TNTsim3D
are focused on implementing the concept of "Action Points."
These are points in the Landscape that may or may not be represented by
overlay symbols. Action points will
define an XYZ position, a radius of action in ground units (e.g., meters), data
describing the points, and the event to execute.
The radius defines a sphere of influence around the points within which
that event will be invoked. Action
points might be used in many ways and many different kinds of events defined for
subsequent use in SML or in an
exterior program. For example, the action might be triggered when the viewer's
XYZ position, nadir position, center line of view projected to the ground, and
so on enters or leaves the point's action sphere. The SML
event triggered by this might vary widely.
The initial example event that will be illustrated in a color plate will
be the action of starting an external Visual Basic program.
This VB program will expose a separate dialog window containing a record
describing the point in a non-TNT
database. This could be a database
whose records are being maintained and updated by some other system(s).
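The planned trigger test can be sketched roughly as follows (hypothetical Python; the position, radius, and callback names are assumptions, not the DV7.0 implementation):

    import math

    # Sketch of an Action Point: an event fires when the viewer's XYZ position
    # enters the sphere of influence defined by the point and its radius.
    # A leave transition could be handled symmetrically.

    class ActionPoint:
        def __init__(self, position, radius, on_enter):
            self.position = position     # (x, y, z) in ground units
            self.radius = radius         # e.g. metres
            self.on_enter = on_enter     # callback, e.g. launch an external program
            self.inside = False

        def update(self, viewer_xyz):
            dist = math.dist(viewer_xyz, self.position)
            now_inside = dist <= self.radius
            if now_inside and not self.inside:
                self.on_enter()          # trigger only on the enter transition
            self.inside = now_inside

    ap = ActionPoint((100.0, 200.0, 50.0), radius=25.0,
                     on_enter=lambda: print("viewer entered the action sphere"))
    for pos in [(0.0, 0.0, 0.0), (110.0, 205.0, 55.0), (500.0, 0.0, 0.0)]:
        ap.update(pos)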
Customizing
the User Interface.
Introduction.
Groups
and layouts are used to define and maintain the relationships between
collections of TNT geodata objects
in various Project Files or linked external files in other formats.
Creating a layout organizes a collection of specific geodata into a
meaningful unit and defines how it is used together in your atlas.
RV6.9 permits the layout(s)
you use to define the menu items and icons used in a TNTatlas
to simplify its view. They are now
also the means by which your custom tools (Macro Scripts and Tool Scripts) are
included and conveyed in a specific atlas and automatically loaded by the TNTatlas.
Simplifying the
View.
Periodically
there have been requests to permit a TNTatlas
to have a simpler user interface. V6.3
introduced the use of the Customize window to globally control the options
presented in your TNT product's
menu and icon tool bar. Now in RV6.9
you can use this same approach to attach these customization preferences to
individual groups and layouts. Whenever
you are viewing a group or layout you can now choose Options / Customize /
Window and check off or on every icon on the icon tool bar or menu entry.
This is illustrated in the attached color plate entitled Customize
Your View Interface. This will
change the interface for all subsequent groups or layouts in TNTmips.
To change the icons and menus present in TNTatlas,
you must run TNTatlas and choose
Options / Customize / Window in the TNTatlas
window and make the desired changes. These
will be transferred by the Wizard to your TNTatlas
if you so choose and will determine the specific user interface presented for
each layout as it uses them. This
is illustrated in the attached color plate entitled Providing a Customized
Interface for TNTatlas/X.
Adding Custom Tools.
The special tools
(Tool Scripts) and queries (Macro Scripts) you create in SML
to extend the capabilities of TNTatlas are usually very "data dependent." For
example, you might create a Tool Script that pops up a window presenting a form.
This form collects the information from the user to populate a query
acting on the attributes of a vector layer in the atlas.
The Tool Script menu can present these as a list of named,
"canned" or predefined queries, each presenting such a form or dialog box.
For example, you might have a series of Tool Scripts accessed via the
menu entries of "Find Your House," "Find Your Street," "Find Your
Nearest Public Park," and so on. The
content, appearance, constraints, and other features of these forms might be
implemented directly in SML using
X/Motif or XML or as an ActiveX component program using C, Visual Basic, or
Java. Each can present a different
form window of your design to collect input from the user to populate and
execute these specific queries on the atlas.
To streamline the
use of different scripts for data dependent custom tools in each TNTatlas,
Macro Scripts and Tool Scripts can now be stored with a specific group or
layout. When a layout with saved
Macro and/or Tool Scripts is selected in your TNTatlas
these associated scripts are added for your use to the menu bar or additionally
as an icon to the toolbar of the View window.
Unlike the previous approach, which is still available, these menus and
icons are not always part of every view in your TNTatlas,
whether they apply or not. Now they
are only presented when the layout to which they are attached is displayed.
These new procedures for managing and applying custom tools are
illustrated in the attached color plate entitled Use Layouts to Customize
TNTatlas/X.
Delivering scripts
with the layouts used in a TNTatlas
is a particularly effective way to add unique tools related to the specific
contents of the atlas and have them automatically installed for use in any TNTatlas
software. For example, you may have
a local copy of TNTatlas installed
on your hard drive and select from a variety of atlases located elsewhere on a
network. Each atlas you select will
load the tools associated with it to your view as you use that atlas.
Miscellaneous.
ActiveX.
Your TNTatlas
can now be accompanied by, launch, and communicate with separate programs using SML
Tool and Macro Scripts via ActiveX. This
is discussed in more detail in the SML
section below.
HyperIndex
Navigator for X and Windows.
HyperIndex links
are used to connect areas in the current view of your atlas to new layouts,
which define completely new geospatial layers.
These link areas may or may not be visible and often do not cover the
complete area of the view. For
example, perhaps you are viewing an image of a state or province and the link
areas are not showing but represent counties, grid squares, or some other
subunits. Often the state boundary
is irregular and perhaps you do not have the geodata to build layouts for every
subarea. Now as you move the cursor
around in the view, the cursor will be the selection arrow if you are not in a
link area and will change to the pointing hand if you are over a HyperIndex link
and can click the mouse and navigate to it.
This is illustrated in the attached color plate entitled Link-Sensitive
Cursor with Navigator Tool. Please
note that the shape of the pointing hand cursor used in RV6.9
and shown in this plate has been changed to the more widely recognized pointing
hand cursor. This is now the same cursor that Microsoft's Internet Explorer and
Apple's Safari show to indicate that a view position has a URL attached for
navigation to another page.
This new cursor will be installed as part of your next patch.
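The test behind this link-sensitive cursor can be sketched as a simple point-in-polygon check (hypothetical Python, not the TNTatlas implementation):

    # Sketch: an even-odd ray-casting test decides whether the cursor sits
    # inside a HyperIndex link area and should become the pointing hand.

    def point_in_polygon(x, y, polygon):
        """polygon is a list of (x, y) vertices."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    county_link = [(0, 0), (10, 0), (10, 10), (0, 10)]
    cursor = "pointing hand" if point_in_polygon(4, 5, county_link) else "selection arrow"
    print(cursor)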
TNTserver
will now use the same version numbering scheme as the other TNT
products.
New
Formats Served.
Previous versions
of TNTserver responded with the JPEG
version of the composite view to a TNTclient.
Now the TNTclient can request
that the server respond with content in several new formats.
The atlas data does not necessarily have the objects stored in these
formats and which one is requested depends on what the client will do with the
response.
JPEG2000 Compressed
(JP2) Rasters.
The raster
requested for a complex view can now be a JP2 file of any specified lossy or
lossless compression ratio. A JP2
file can be of better quality and smaller size than a JPEG file.
This provides a better view using less bandwidth.
However, the temporary disadvantage is that the browser user must install a
free plug-in, such as the one provided by Adobe.
This plug-in could be installed automatically like many others, but this
adds the disadvantage of significantly slowing down the initial use of the TNTclient.
Eventually browsers will add decompression and use of JP2 files.
Apple's Safari V1.2 browser already supports JP2 files, as well as JPEG
and PNG, using their implementation in QuickTime. Morgan,
a French company, provides a good plug-in for JP2 support for Internet Explorer
at www.morgan-multimedia.com/JPEG2000/.
Portable Network
Graphics (PNG) Rasters.
The raster
requested for a complex view can now be a PNG file with lossless compression, a
transparency layer (i.e., an alpha or RGBA layer), embedded text, and
other features. PNG files are
widely used in connection with HTML and do not require a plug-in for the
browser. Just as with JP2, requesting PNG from TNTserver rather than JPEG can
return lossless images, and the browser client and its end user can control the
scale of the view and, thus, the degradation of the image it renders.
Scalable Vector
Graphics (SVG) Content.
TNTserver
can now respond with SVG content incorporating vector or CAD elements
accompanied by embedded attributes, images, and javascript tools.
Additional details on the new process used by TNTserver
and the other TNT products to create
interactive SVG layouts and its new capabilities are provided in the section
below entitled Render to SVG.
Serving up an SVG layout, just as with Flash, requires that end users' web
browsers have an SVG plug-in to interpret it.
SVG plug-ins, actually
called SVG viewers, are available from several sources such as Adobe, Corel, and
others. Adobe's SVG Viewer (ASV)
and Batik, a Java based suite of SVG applications including a standalone viewer,
are probably the most fully implemented of the SVG viewers currently available.
The SVG content returned from TNTserver, including the latest sample embedded
javascript tools, works in the Adobe and Corel viewers as well as Batik's
standalone viewer.
MicroImages recommends the Adobe SVG Viewer 3.0 (i.e., the browser plug-in),
which can be downloaded from www.adobe.com/svg/# for Windows and Mac OS X for
use in 14 languages. Corel's SVG Viewer can be downloaded from www.corel.com/svgviewer/.
Batik's standalone viewer can be downloaded from
http://xmlgraphics.apache.org/batik/.
Konqueror (Linux KDE environment), Mozilla, and several mobile phones and PDAs
are implementing built-in or plug-in based SVG support as well. A
detailed list of SVG viewers, conversion tools, and editors is provided by
the World Wide Web Consortium at www.w3.org/Graphics/SVG/SVG-Implementations.
HTML-based
TNTclient.
MicroImages'
HTML-based TNTclient can request an
SVG layout from TNTserver.
While this client is highly and easily customizable since it uses only
HTML, many of its features are not needed or are just not as useful in the
context of receiving an SVG layout. In
other words, the existing interactive features of this TNTclient,
and many more, could now be assumed by the combination of using an SVG layout
with its associated javascripts and its linked content. The
following examples will clarify this. Additional
information and illustrations of these SVG features are also discussed in the
section below entitled Render to SVG.
Layer
Control.
Adding and deleting layers using the sample HTML-based TNTclient's layer
control panel requires a new request to TNTserver and the return of the
revised raster view.
Using SVG content provides this control of the graphic layers and
embedded images to the TNTclient.
This sample client can now use either or both of these methods.
DataTips.
Using the sample HTML-based TNTclient's InfoTips panel requires a request to
be sent to TNTserver to retrieve the database (i.e., attribute) information
about the selected
feature. Database information can
be encapsulated with the graphical features requested as part of the SVG content
returned from TNTserver.
This means that the client can now interactively expose this same
information as DataTips for each graphical element. In addition, if desired,
the element (for example, line or polygon) to which the DataTip applies can be
automatically highlighted, filled, or blinked.
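One way such attribute text can travel with the graphics is sketched below: the attribute becomes an SVG title element, which most SVG viewers expose as a tooltip (a hypothetical Python illustration of the idea, not TNTserver's actual SVG output):

    from xml.sax.saxutils import escape

    # Sketch: build a polygon whose DataTip text rides along as a <title>
    # child, so the client can show it without another server request.

    def polygon_with_datatip(points, datatip_text, fill="#cce5ff"):
        pts = " ".join(f"{x},{y}" for x, y in points)
        return (f'<polygon points="{pts}" fill="{fill}">'
                f"<title>{escape(datatip_text)}</title></polygon>")

    svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">'
           + polygon_with_datatip([(10, 10), (150, 20), (90, 160)],
                                  "Parcel 42: zoned residential")
           + "</svg>")
    print(svg)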
Measurement
Tools.
Measurement tools
in the HTML-based TNTclient's
measurement panel do not require a return trip to the TNTserver
since they can be made using only HTML and acting on the georeferenced raster in
the view. However, interactive
javascript measurement tools can also be provided to a client as part of the SVG
content returned by TNTserver.
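Such a client-side measurement can be sketched as a pixel-to-map conversion followed by a distance computation (hypothetical Python; the georeference values are placeholders):

    import math

    # Sketch: with a simple north-up affine georeference for the returned
    # raster, pixel positions convert to map coordinates and can be measured
    # without another trip to the server.

    def pixel_to_map(col, row, origin_x, origin_y, cell_size):
        return origin_x + col * cell_size, origin_y - row * cell_size

    def distance_between_pixels(p1, p2, origin=(500000.0, 4600000.0), cell_size=30.0):
        x1, y1 = pixel_to_map(*p1, *origin, cell_size)
        x2, y2 = pixel_to_map(*p2, *origin, cell_size)
        return math.hypot(x2 - x1, y2 - y1)

    print(f"{distance_between_pixels((10, 10), (210, 160)):.1f} map units")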
Limitations.
Current HTML/SVG
incompatibilities in the sample HTML-based TNTclient
are that zoom requests made from within the HTML do not take into account zoom
levels in the SVG layout and HTML interface elements cannot overlap the display
area used by the SVG viewer.
DV7.0: Supporting OpenGIS's Web Map Service (WMS).
Introduction.
MicroImages is
currently extending the TNTserver to
implement the protocol specified for the Open GIS Consortium's (OGC) Web Map
Service (WMS) V1.1.1 (see www.opengis.org/specs/?page=specs).
When this is available, TNTserver
will still access a TNTatlas layout
and return either a JPEG, JP2, PNG file(s) or an SVG layout, but will then
respond to requests issued using either the WMS or the current TNT
protocol.
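For reference, a WMS 1.1.1 GetMap request is simply a parameterized URL such as the one sketched below (the server address and layer names are placeholders, not a real TNTserver endpoint):

    from urllib.parse import urlencode

    # Sketch of a WMS 1.1.1 GetMap request as specified by the OGC.

    def getmap_url(base_url, layers, bbox, width, height,
                   srs="EPSG:4326", fmt="image/png"):
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.1.1",
            "REQUEST": "GetMap",
            "LAYERS": ",".join(layers),
            "STYLES": "",
            "SRS": srs,
            "BBOX": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy
            "WIDTH": width,
            "HEIGHT": height,
            "FORMAT": fmt,
        }
        return f"{base_url}?{urlencode(params)}"

    print(getmap_url("http://example.com/wms", ["boundaries", "hydrology"],
                     bbox=(-104.06, 40.0, -95.3, 43.0), width=800, height=600))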
What Does It Add?
A browser-based client, or any other client, can issue requests to any
available server implementing this WMS protocol and expect it to respond
correctly if it has implemented that particular feature of the protocol. As
a result, this next version of TNTserver
will also respond to any of these clients written by others that issue WMS
requests. Conversely, a client you
write or sample TNTclients written
by MicroImages can issue requests to any WMS site as well as TNTserver
and combine the results as appropriate. Furthermore,
supporting requests using only the
WMS protocol will not require the use of the Tomcat service with TNTserver.
Clarifications.
Supporting WMS
protocol indicates that a server will respond to requests that come to it using
its documented protocol. The
designation that a server (for example, TNTserver)
implements WMS 1.1.1 protocol does not mean that it will respond to every
possible WMS request to it. It also
does not prevent that server from responding to requests in any other additional
protocol it may support as an alternative to or extension of the WMS.
Whether or not a server responds correctly or at all to a specific
request can vary widely. Almost all server products listed as supporting WMS
are in a category on the OGC site (see www.opengis.org/resources/?page=products)
designated as "Implementing Products, that is, software products that implement
OpenGIS Specifications but have not yet passed a compliance test."
Thus a close inspection of this OGC listing of server products reveals
that at present, only 3 commercial server products are certified by OGC as WMS
1.1.1 "Compliant Products, that is, software products that comply to OGC's
OpenGIS® Specifications." These must be further qualified by OGC adding that
"Compliance tests are not available for all specifications."
It is also
important to understand that a server?s implementation of WMS 1.1.1 may also
be restricted to responding to requests in that protocol. Thus, the designation
that a product implements WMS does not mean that the server can
issue requests in WMS or any other protocol to other WMS sites either locally or
over a network such as the internet. To issue such requests, the server must
support the additional capability of acting as a client to other WMS sites.
Cascading
Service.
The use of the term client in connection with servers in this context can be
confusing because it is popularly used to indicate the software implementation
for the end users, such as the party using the browser or other
human-interfacing product. However, in a generic computing sense, a client is
simply any software component that issues requests to another source of
information.
Thus, if the server product supporting WMS has an embedded client capable of
issuing requests in WMS protocol, it is called a Cascading Web Map Service (CWMS).
The WMS 1.1.1 specification states:
"A 'Cascading Map Server' is a WMS that behaves like a client of other WMSes
and behaves like a WMS to other clients. For example, a Cascading Map Server
can aggregate the contents of several distinct map servers into one service.
Furthermore, a Cascading Map Server can perform additional functions such as
output format conversion or coordinate transformation on behalf of other
servers."
Current Activity.
At this time MicroImages is implementing the client-level capability in
TNTserver to enable it to issue requests to other WMS servers.
Eventually this will enable TNTserver
to act as a CWMS. As is the current
case, TNTserver will continue to be
available for Windows-based platforms.
TNTview
is a subset of TNTmips and is
available on every supported platform including the new RV6.9
64-bit options introduced above. The
processes in TNTview and TNTmips
use the identical code. TNTview
provides all of TNTmips that deals
with the management and 2D and 3D visualization of geospatial data, but does not
support its creation. As a result TNTview
provides more than 1/2 of all the code and features making up TNTmips.
Your software authorization key or floating license determines the subset of TNTmips
that will be installed and available for use with your TNTview
license. In fact, various sites are
using single floating licenses to serve out their independently licensed seats
for TNTmips, TNTedit,
and TNTview.
Inherited
New Features.
TNTview
V6.9 provides all the following new features introduced in detail in the
appropriate sections indicated in parentheses for TNTmips.
All of the system
level changes. (see System Level
Changes.)
Expanded TIFF
support, which links and imports more varieties of TIFF files.
(see TIFF
Support.)
Huge raster objects
that now use JPEG2000 compression can be displayed in seconds regardless of
their size. You can now also link
to, and display equally huge GeoJP2 files (compressed with JPEG2000 and
georeferenced). Linking to external
JP2 files was provided in V6.7.
(see JPEG2000
Compression.)
All the new 2D
display improvements are available including antialiasing of thin lines, a more
compact LegendView, revised Macro and Tool Script installation, linear color
transparency in color palettes, dual coordinate readout, and zooming to a
specific position. (see 2D
Display.)
All the new 3D
display improvements including the new faster, better dense ray casting and
variable triangulation terrain rendering models.
(see 3D
Display.)
The new leader and
frame controls for labels can be used. (see
Label Frames and
Leaders.)
Tabular views can
be refreshed for database tables that may be continuously changing, such as
those linked via ODBC. (see Refreshing
Tabular Views of External RDBMS.)
Importing Arc
shapefiles will include their point styles.
(see Arc Shapefile Point Symbols
Import/Export.)
Monitor and printer
color can be calibrated using the ICM (Microsoft) or ICC (Apple) color
management profiles provided for your hardware.
(see Color
Management.)
All the extensive
changes and additions to SML scripting are available.
These include improved documentation, examples, and management of
functions and classes; debugging including execution timing; interface design in
Visual Basic; and many other new technical features.
(see SML
Scripting.)
Upgrading
TNTview.
If you did not
purchase RV6.9 of TNTview
in advance and wish to do so now, please contact MicroImages by FAX, phone, or
email to arrange to purchase this version. When you have completed your
purchase, you will be provided an authorization code by FAX.
Entering this authorization code while running the installation process
allows you to complete the installation of TNTview
RV6.9.
The prices for
upgrading from earlier versions of TNTview
are outlined below. Please remember
that new features have been added to TNTview
with each new release. Thus, the
older your version of TNTview
relative to RV6.9, the higher your
upgrade cost will be.
Within the NAFTA
point-of-use area (Canada, U.S., and Mexico) and with shipping by ground delivery.
(+50/each means US$50 for each additional upgrade increment.)
Price to upgrade from TNTview:

| TNTview Product | V6.80 | V6.70 | V6.60 | V6.50 | V6.40 | V6.30 and earlier |
| Windows/Mac/Linux | US$175 | 275 | 400 | 500 | 555 | +50/each |
| for 1-user floating | US$210 | 330 | 480 | 600 | 667 | +60/each |
| UNIX for 1-fixed license | US$300 | 475 | 600 | 675 | 725 | +50/each |
| for 1-user floating | US$360 | 570 | 720 | 810 | 870 | +60/each |
For a point-of-use
in all other nations with shipping by air express.
(+50/each means US$50 for each additional upgrade increment.)
Price to upgrade from TNTview:

| TNTview Product | V6.80 | V6.70 | V6.60 | V6.50 | V6.40 | V6.30 and earlier |
| Windows/Mac/Linux | US$240 | 365 | 465 | 545 | 605 | +50/each |
| for 1-user floating | US$288 | 438 | 558 | 654 | 726 | +60/each |
| UNIX for 1-fixed license | US$350 | 550 | 700 | 800 | 850 | +50/each |
| for 1-user floating | US$420 | 660 | 840 | 960 | 1020 | +60/each |
The standalone TNTedit
product is increasing in popularity as it is recognized as a superior spatial
data editor. It is being used to
create or import and improve complex spatial data for subsequent export and
analysis in other products. TNTedit
is a subset of TNTmips and is
available on every supported platform including the new 64-bit RV6.9
options introduced above. The
processes in TNTedit and TNTmips
use the identical code. Your
software authorization key or floating license determines the subset of TNTmips
that will be installed and available for use with your TNTedit
license. In fact various sites are
now using single floating licenses to serve out their independently licensed
seats for TNTmips, TNTedit,
and TNTview.
Inherited
New Features.
It has become
repetitious and perhaps confusing to summarize for each new release the many new
features TNTedit has in common with
those added by virtue of its shared code with TNTmips.
Those who own and use TNTedit, but not TNTmips, can read the complete descriptions of these new features in their detailed introductions
and supporting color plates in the major TNTmips
RV6.9 section below. Almost
everything that has been added in RV6.9
for TNTmips is available for use in TNTedit
RV6.9 with the following exceptions.
All aspects of the
use of JPEG2000 compression detailed for TNTmips
can be used in TNTedit and TNTview
except that raster extract is an image analysis feature of TNTmips
and is not included in either of these products.
Therefore you can not use raster extract as described below to reduce the size of your JPEG2000 compressed raster (or any other raster object) or to divide it into tiles as smaller raster objects.
TNTedit
and TNTview do not provide the TNTatlas
Assembly Wizard. However, this does
not prevent users of TNTedit from
making layouts and assembling them into atlases for use in TNTatlas.
Thus, all the new features detailed below can be applied to atlases you
assemble in TNTedit.
These atlases can then be used with the FREE TNTatlas
product.
You can add and
test Ground Control Points (GCPs) with the new Rational Polynomial Coefficient (RPC)
image model. However, since TNTedit
does not include image analysis processes, you can not use raster resampling and
the various image transformation models it provides including the new RPC
satellite image orthorectification model. As
a result, as a user of TNTedit, you
can skip the long section below entitled Rectification of QuickBird and IKONOS
Images explaining this new TNTmips
feature.
Upgrading
TNTedit.
If you did not
purchase RV6.9 of TNTedit
in advance, and wish to do so now, please contact MicroImages by FAX, phone, or
email to arrange to purchase this version. When you have completed your
purchase, you will be provided an authorization code by FAX.
Entering this authorization code while running the installation process
allows you to complete the installation of TNTedit
RV6.9.
The prices for
upgrading from earlier versions of TNTedit
are outlined below. Please remember
that new features have been added to TNTedit
with each new release. Thus, the
older your version of TNTedit
relative to RV6.9, the higher your
upgrade cost will be.
Within the NAFTA point-of-use area (Canada, U.S., and Mexico) and with shipping by ground delivery. (+$50/each means US$50 for each additional upgrade increment.)

| TNTedit Product: price to upgrade from | V6.80 | V6.70 | V6.60 | V6.50 | V6.40 | V6.30 and earlier |
| Windows/Mac/Linux, 1-fixed license | US$350 | 550 | 700 | 800 | 875 | +50/each |
| Windows/Mac/Linux, 1-user floating | US$420 | 660 | 840 | 960 | 1050 | +60/each |
| UNIX, 1-fixed license | US$650 | 1000 | 1350 | 1600 | 1750 | +50/each |
| UNIX, 1-user floating | US$780 | 1200 | 1620 | 1920 | 2100 | +60/each |
For a point-of-use
in all other nations with shipping by air express. (+$50/each means US$50 for
each additional upgrade increment.)
| TNTedit Product: price to upgrade from | V6.80 | V6.70 | V6.60 | V6.50 | V6.40 | V6.30 and earlier |
| Windows/Mac/Linux, 1-fixed license | US$400 | 750 | 950 | 1100 | 1200 | +50/each |
| Windows/Mac/Linux, 1-user floating | US$480 | 900 | 1140 | 1320 | 1440 | +60/each |
| UNIX, 1-fixed license | US$750 | 1200 | 1550 | 1850 | 2000 | +50/each |
| UNIX, 1-user floating | US$900 | 1440 | 1860 | 2220 | 2400 | +60/each |
There are now 76 TNT
Tutorial and Reference booklets. These
booklets provide more than 2000 pages and over 4000 color illustrations.
The most important of these booklets are up-to-date with the features in RV6.9
of the TNT products.
However, others still show minor differences primarily in the user
interface illustrations of earlier TNT
versions. For a quick overview of
the added and revised booklets please see the attached color plate entitled New
Tutorials. Additional revised booklets will be provided as completed for your downloading via microimages.com
as part of the patches issued for RV6.9.
Each new
professional TNTmips ships with 3
thick notebooks containing a color printed copy of these 76 booklets.
Those of you receiving your RV6.9
upgrade on CD can view and refer to all of these booklets using Adobe Reader or Acrobat Reader. If you install all these booklets as part of any TNTmips product, you can access these booklets directly from the Display menu, by choosing Help / Tutorial Overview and selecting the booklet, or via Help / Search and using the index this provides.
Searching
with Indexes.
RV6.8 and RV6.9 provide Adobe PDF indexes for direct access to all the TNT products' written material in the 76 tutorial and reference booklets and the reference manual via your Adobe Reader. From the index you can go directly to the corresponding position in the PDF versions of these documents in Reader. Unfortunately, Adobe Reader for the Macintosh did not support using these indexes until the recent release of version 6.x. If you now upgrade your free Adobe Reader to this current version for the Macintosh, you will be able to use the indexing feature illustrated in the attached color plate entitled Searching with the Index in Adobe Reader.
New
Booklets Available.
Building Dialogs in
SML.
TNT's SML scripts continue to gain wider use within the TNT products. As a result, the reference material supporting your use of this TNT feature has been greatly expanded and divided into 2 booklets. You still have available the booklet Writing Scripts with SML, but it has been completely revised and expanded to 64 pages. Many new topics are now covered, including the latest new features such as how to use SML to communicate between TNT products and your Visual Basic programs. Since the material in this topic area has been expanding rapidly, some of it was moved to a second and new booklet entitled Building Dialogs in SML. This new booklet is focused upon how to build your SML control dialogs using either of two approaches: the older X Windows/MOTIF approach or the newer XML approach. Please also note that you can now use your own Visual Basic programs to create forms and other user interface components for your SML scripts and to communicate with the TNT products.
Orthorectification
Using Rational Polynomials.
Note: This tutorial booklet entitled Orthorectification Using Rational Polynomials was completed after the CD for RV6.9 was duplicated; therefore, you are being provided a printed copy.
As introduced
in the major section below entitled Rectification of QuickBird and
IKONOS Images, TNTmips now
provides a simple procedure to produce orthorectified images from full or
partial QuickBird and IKONOS images ordered as Rational Polynomial ortho ready
kits. This procedure requires a
good quality DEM of the area covered and several well distributed, accurate XYZ
ground control points (GCPs). This
new booklet covers the procedures available in this new process.
It outlines how to obtain and evaluate GCPs and test points of varying
quality. Significant modifications
of the TNT georeference procedures
were required to enter and use these XYZ GCPs and test points for this process
and are discussed. The various
methods built into this process to measure the map accuracy of the ortho images
produced are reviewed. A sample exercise is provided using a color IKONOS 4-meter resolution image of La Jolla Mesa, San Diego County, California, and the corresponding DEM, which will fit within the size limits of TNTlite.
The PDF version of this booklet and the sample data can be downloaded for
addition to your RV6.9 system from
www.microimages.com/getstart/rpcortho.htm.
Installation and
Setup Guide.
With so many platforms supported by the TNT products, and constant changes in operating systems, network environments, permissions, and multiple-user setups, it is difficult to keep this kind of material current. It has to cover historical
situations such as W98 and older Linux flavors, language setup issues, as well
as a wide variety of network designs and user authorizations extending to
floating licenses. This completely
revised booklet reflects the latest information we can provide on this volatile
topic for the TNT products for
Windows, Mac OS X, Linux, and UNIX. Please
contact MicroImages support if you need further assistance in installing your TNT
product or configuring it on your network to meet your specific requirements.
Glossary for
Geospatial Science.
A MicroImages
client brought to our attention that we had a good glossary of terms but that it
was buried in an appendix of the reference manual and hard to get at and search.
Responding to this input, this glossary has been extracted, reviewed,
updated, and reformatted into a standard reference booklet and is now indexed
and provided with the standard tutorial materials directly available from within
your TNT product via Adobe Reader.
Revised
Tutorials with Major Changes.
The following
tutorial booklets have been revised since the release of RV6.8.
They were selected for update since they represent areas of significant
recent changes in the TNT products.
The added functionality of newly released features is introduced by the addition
of new pages and examples as noted. As
part of this update their user interface illustrations, terminology, default
parameters, and sample data have also been adjusted to be current with RV6.9
of the TNT products.
Making Map Layouts has the following new pages:
- Toggling Tick Mark Colors: how to toggle between two selected tick mark colors;
- Using the Placement Tool: general use of the new Placement tool including context sensitive cursors, DataTips, and the right mouse button menu;
- Other Legends from Database Tables: making formatted text legends from a computed field in a database table;
- More on 3D Groups in Layouts: more information about 3D groups in layouts including how to insert them.
Creating and Using Styles has the following new pages on the new style assignment and style editor interface, and other new features:
- Symbols from Other Sources: select symbols from another style object, from CAD, from CGM, or from a TrueType font;
- Are You Exporting?: presentation of style components that may not be supported by external formats;
- Symbols with CartoScripts: modifying scripts and using different scripts to generate point symbols;
- DispParmView and DispParmEdit: setting up and saving display parameters for use in Spatial Data Display and the Spatial Data Editor.
Writing Scripts with SML has the following new pages on debugging, batch processing, and Tool Scripts:
- Be Creative with SML: outlines how to use different types of scripts to perform tasks with differing levels of complexity and interactivity (Process, Macro, Tool, and other types of scripts).
- SML Debugger and Script Timing: using the SML Debugger window to track down script errors and analyze execution times for each part of the script.
- Batch Import with SML: provides an example of using SML to automate a repetitive task such as importing hundreds of files with the same format.
- Sample Tool Script: a sample Tool Script that can be used to select point, line, or polygon elements (user choice via a dialog) from a vector object in a view.
- Modify and Extend Tool Scripts (I and II): these 2 pages show how an existing Tool Script can be modified to perform a different task.
Sharing Geodata with Other Popular Products has been revised so that the exercise on EPS focuses on converting to Adobe Illustrator's native format (.ai). The Database and ODBC exercises are changed from linking through the import process to use of the new Link to Data Source feature in Spatial Data Display. Additional new topics are also introduced in the following new pages:
- Convert Layouts to PDF: conversion of a layout to PDF using the print process;
- Direct Use of MrSID, ECW, and JPEG2000: download JPEG2000 files from MicroImages' web site and display them;
- Layouts to SVG: download the SVG viewer and convert a TNT layout to an SVG layout using the print process;
- Import/Export Oracle Spatial: import and display vector objects from Oracle Spatial.
Using TNTatlas for X Windows was updated to include additional features:
- Customization of the TNTatlas Interface: how to change what appears on menus and toolbars;
- Adding and Using Custom Tools: how to detect if the atlas you are using has Macro Scripts or Tool Scripts available, how to use them, and how to add your own;
- Introduction to the GeoToolbox: seamlessly switch from sketching to measuring to selection to region generation;
- Using the TNTatlas as a Viewer: in addition to individual objects in the TNT products' Project File format, TNTatlas can directly view TIFF/GeoTIFF, JP2, MrSID, ECW, shapefile, and TAB formats.
Constructing an Electronic Atlas has been updated to include additional features available, and the discussion of the TNTatlas Assembly Wizard has been expanded. The additional new topics are introduced with the following new pages:
- Custom Tools with Layouts: include layout-specific tools for use with that layout only;
- Launch TNTsim3D from TNTatlas: use a Macro Script to launch TNTsim3D from TNTatlas;
- Additional Map Scale Control Methods: discusses map scale control by group and by element;
- Customizing the TNTatlas Interface: how to remove/add items to the menus and toolbars;
- Permissions for Use of Atlas Files: determine what TNT products can be used to view your atlas and how the data can be used;
- JPEG2000 Compression for Atlases;
- Additional Parameters for ATL Files: all parameters that can currently be specified in a *.atl file, such as position, zoom, background color, and number of views open when the atlas launches;
- Launching TNTatlases/W from CD: the steps necessary to autorun a specific TNTatlas for Windows when the CD is inserted.
Navigating
has been updated to provide information on the Windows Desktop and other
interface / X Window System features. It
also has new pages on:
- Host System Differences: MI/X under Windows, the Windows Desktop interface, X11 with Mac OS X, and the X Window System under Linux/UNIX all have slight variations. More than ever, the TNT products now blend in with the conventions of the host operating system.
- Geospatial Catalogs: how to set up and use GeoCatalogs, which let you visually select from thumbnails of project materials that are in the same geographic location as your current view.
Floating License Setup and Management Guide has been completely revised and now also covers the topics of system configuration, installing the license manager, the software license, and TNT's checking of the software authorization key.
Changing
Languages has been completely rewritten and simplified to reflect the
simpler procedures now available.
Note: The following 3 booklets (Managing Relational Databases, Printing, and Vector Analysis Operations) were revised with major changes after the CD for RV6.9 was duplicated. To use them, either install the latest PV6.9 or download the specific booklet.
Managing Relational Databases has been updated to include shape objects and changes to tabular view. The illustrations were updated throughout the booklet, and terminology was adjusted to reflect the current interface and defaults. The following new pages were added:
- Changing Related Only to Directly Linked: how to make related-only tables into directly attached tables so a database can be simplified;
- Database Validate and Attachment Types: introduces database validation and discusses the importance and implications of various attachment types;
- Link to ODBC Data Sources: presents the Link to Data Source feature in Display and contrasts it with linking during import; and
- Many Ways to Associate Tables: summarizes the many ways to associate database information with objects in the TNT products.
Printing has been updated to include color management and newly supported external formats. The following new pages were added:
- Color Management: color profiles (ICM and ICC) and how to proof to the screen;
- Printing to External Formats: converting layouts to TIFF, Adobe Illustrator (*.ai), PDF, EPS, and SVG;
- Options When Printing to SVG: compression and layer controls; and
- Hints for Reliable Printing: setting printer defaults and page orientation, do not dither twice, printing transparency efficiently.
Vector Analysis Operations has been updated to include material on creating and using grids with accompanying exercises. The following new pages were added:
- Grid Analysis: generating grids within reference objects;
- Grids for Extraction: using generated grids to extract from raster objects;
- Grids and Surface Properties: getting surface properties for generated grid polygons; and
- Vectors and Surfaces: converting 2D vectors to 3D vectors and using 3D views in editing.
Revised
Tutorials with Minor Changes.
Displaying
Geospatial Data has been updated to reflect interface and terminology
changes.
Georeferencing
has been updated to reflect interface and terminology changes.
Editing
Vector Geodata now covers the "Undo" operation and has been updated to
reflect interface and terminology changes.
Editing
Raster Geodata has been updated to reflect interface and terminology
changes.
Theme
Mapping has been updated to reflect interface and terminology changes and
changes in the default naming procedures for theme map style assignment tables.
Using
TNTatlas for Windows has been updated to reflect interface changes for newer
operating systems.
Precision
Farming provides corrected links to web site references for use in obtaining
its sample data.
Technical
Characteristics has been updated to cover new operating systems, X Server
changes, localization, and the revised patching system.
Making
Image Maps now provides additional information on map scale controlled
display.
Translating
Tutorials had a minor change for clarification.
Main sections or subsections preceded by the asterisk "*" symbol introduce significant new processes or features in existing processes released for the first time in TNTmips RV6.9.
System
Level Changes.
Miscellaneous.
Recovering
Project Files.
Project
Files in V6.8 and earlier used the
first 4-KB block to store the pointers to the location in the file of its other
components. When areas other than
the first 4 KB are damaged, Support / Recover Project File can often recover or
repair all or most of the contents of the Project File. However,
a Project File would have serious problems if any of the data in this first 4-KB block was damaged in any way, such as during writing, storage, reading, transmission, or by some aberrant TNT activity.
The most common kinds of file damage occur at the start or end of a file when it is read, written, or moved. RV6.9 duplicates the 4-KB pointer, or index block, at the beginning and end of the Project File. Now, when any TNT process accesses any Project File, it compares these two blocks. If they do not match, the process returns an error message stating that the Project File may be damaged and that you should repair it using the Recover Project File process. This process can often determine which of the two blocks is correct by examining their contents and comparing them to the actual data they point to in the rest of the file. The recovery process can then reset both blocks.
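To make the duplicated index block idea concrete, here is a minimal sketch in Python of the comparison step (an illustration only; the Project File layout beyond the duplicated 4-KB blocks is MicroImages' internal design, and the function name is hypothetical):

    # Compare the 4-KB index block at the start of a file with the duplicate
    # copy stored at its end, as described above for RV6.9 Project Files.
    BLOCK_SIZE = 4096

    def index_blocks_match(path):
        with open(path, "rb") as f:
            first = f.read(BLOCK_SIZE)      # index block at the start of the file
            f.seek(-BLOCK_SIZE, 2)          # move to the last 4 KB of the file
            last = f.read(BLOCK_SIZE)       # duplicated index block at the end
        return first == last

If the two blocks differ, a recovery step would then decide which copy is intact by checking its pointers against the data they reference, as the Recover Project File process does.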
Vector
Topology.
Element ID tables
can be optionally created or recreated when a vector object is validated.
A
toggle is available to optionally create an element ID table corresponding to the primary links to the attributes of a vector or CAD object.
Use this table with care as these element IDs are TNT
internal data and are altered by many TNT
processes. However, this table may
be useful to advanced users when it is made as the last step just before the
object is exported, used in an external process, or within SML.
Project File
Maintenance.
The Object
Information window now shows the cumulative cell count and cumulative area for
the histogram in addition to the cell count and area for each cell value
(Support / Maintenance / Project File / select
file / select raster
/ select histogram / Info).
This information is provided for floating point as well as integer raster
object types.
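The cumulative figures are simply running totals over the per-value histogram. As a small illustration (plain Python, not TNT code), assuming a list of (cell value, cell count) pairs and a known cell area:

    # Accumulate running totals of cell count and area alongside the
    # per-value histogram figures described above.
    def cumulative_histogram(counts, cell_area):
        rows, total_cells, total_area = [], 0, 0.0
        for value, count in counts:
            total_cells += count
            total_area += count * cell_area
            rows.append((value, count, count * cell_area, total_cells, total_area))
        return rows   # value, count, area, cumulative count, cumulative area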
The
Object Information panel displayed for a raster object now includes the cell
size and scale computed from the last-used georeference object (Support /
Maintenance / Project File / select file /
select raster / Info).
It also displays the date and time for the last modification of all
objects and subobjects.
Viewing Object
Extents.
The extents of any
selected object can now be viewed in the units and projection you choose in the
Object Extents dialog. If you
choose to display in latitude/longitude in this dialog, you can select the
preferred DMS format for their presentation. Any
or all of the text in this dialog box can be selected with the mouse and copied
to the clipboard so that it can then be pasted into Microsoft Word or some other
application.
DataTips.
The Z value for
contours will automatically become the default DataTip when the contours are
created in a TNT surface modeling
process.
Database
Attachments.
In addition to
raster, vector, CAD, and TIN, database records can now be attached to elements
in shape objects.
TIFF
Support.
The auto-link
system now allows linking to any TNT
supported TIFF file for direct use. V6.8
provided only links to grayscale and RGB. If
more than one "image" exists in the TIFF file, you will be shown its
hierarchy to select from. To permit
these and other improvements, the link file (*.rlk) created for auto-linked TIFF
files in RV6.9 can not be used in V6.8
TNT products or earlier.
If you wish to use V6.8 to
link to a TIFF file created in RV6.9,
delete its companion RLK file and autolink to it in V6.8.
Import or linking
now prompts for a single raster object for each TIFF file selected.
If the TIFF file contains multiple images, it then imports and puts the
multiple images into separate raster objects at the corresponding location.
As in V6.8, if the TIFF file
contains a single image, the raster object will be named the same as the TIFF
file. If the TIFF file has 3
components these raster objects will be named Red, Green, and Blue.
If it has more than 3 components they will be named Component 1,
Component 2, Component 3, Component 4, and so on.
This new approach for TIFF files is similar to HDF import and overcomes
the issue of having a potentially complex hierarchy that would require you to
individually enter the names of multiple rasters for a TIFF file containing
multiple components. If you do not want to use the automatically assigned names, you can edit them
after the link or import has been completed.
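Stated as a small sketch (an illustration of the naming rule just described, not actual TNT code; the helper name is hypothetical):

    # Automatic raster object naming for a TIFF file with a given number of
    # component images, following the rule described above.
    def default_raster_names(tiff_basename, component_count):
        if component_count == 1:
            return [tiff_basename]               # single image: named after the file
        if component_count == 3:
            return ["Red", "Green", "Blue"]      # 3 components: RGB names
        return ["Component %d" % (i + 1) for i in range(component_count)]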
Export
now permits any number of same-sized numeric (signed, unsigned, or
floating-point) raster objects to be placed in the same TIFF file.
The option requiring you to specify 16-bit grayscale/48-bit color for
export to TIFF has been removed. These
types of raster objects are now automatically selected for export to TIFF if the
raster object's data type exceeds 8 bits.
Export of a floating-point raster object to a floating point TIFF is now
also automatic. You can also specify the desired DPI setting during export to
TIFF as other image editing software such as Photoshop often expects this
information.
For
your future reference, RV6.9 uses an
earlier version of the TIFF libraries. DV7.0
will use the newest available TIFF libraries (V3.6), which support various new additional features (that is, new tags) such as the ICM/ICC color profiles described below in the section entitled Color Management.
DV7.0 also has added the
ability for you to view metadata for the TIFF file based on the contents of all
of the tags in the file.
JPEG2000
Compression.
Use for Internal
Raster Objects.
Wide Ranging
Use.
Lossless or lossy
JPEG2000 compression can now be used for raster objects in any TNT
Project File. You can choose from
lossless, lossy best quality, or specify a fixed lossy ratio in all the TNT
products, just as previously available in V6.7
and V6.8 for the creation and use of
external JP2 files. Now all of the TNT
processes support the use of internally compressed JPEG2000 raster objects while
continuing to provide full pyramiding and all associated subobjects and
characteristics. For example, you
can use JPEG2000 compression with a raster object from import through analysis,
resizing, reprojection, editing, mosaicking and on into a TNTatlas.
TNTserver can even send a JP2
compressed raster out to a TNTclient. All
of these uses will often be even faster than using uncompressed raster objects.
No Performance
Penalty.
Most of the
projects you undertake will be small enough that the time to compress a raster
into JPEG2000 will not be noticeable. The
use of JPEG2000 compression in connection with huge rasters of hundreds of
gigabytes will be discussed in detail in a section below.
After compression, there is no decrease in performance when these rasters
are decompressed in Display and other TNT
processes. In fact, they can often be displayed and accessed faster since the
amount of data to be read from your hard drive, CD, or DVD may be significantly
reduced for lossless and markedly reduced for lossy JPEG2000 compression.
Cautions for
Lossy Compression.
Care must be given to
deciding where to use a lossy compressed raster object.
If you elect to use lossy compression to reduce your Project File size,
it may adversely impact your subsequent use of this raster object.
For example, applying lossy compression of any kind to an image and
discarding the original, larger image will prevent you from ever regaining
access to all the detail in the original image.
Never apply any form of lossy compression to a raster to which you intend
to apply further quantitative analysis; for example, never to multispectral
images that will be subjected to unsupervised or supervised automatic
interpretation. In other words,
know what you are doing and where you are going before applying lossy image
compression of any kind. However,
lossy compression has many benefits when used for its intended purpose for large
reductions in file size to conserve communication bandwidth or storage media
requirements for images in their final, user-oriented form.
Lossy compression is very useful to drastically reduce file size and
access time in a TNTsim3D, TNTatlas,
TNTserver, and at other steps in the
final publication of your result. For
example, a 40 GB raster object can easily be reduced to a 4 GB or smaller raster
object for use as an image reference layer in a TNTatlas
distributed on a DVD, which still leaves room for vector and database layers.
Benefits of
Lossless Compression.
The availability of
JPEG2000 lossless compression in the TNT
processes has much wider utility in potentially saving considerable space in all
the Project Files created during your analysis projects.
You will be surprised how much the application of lossless JPEG2000 can
reduce an image that has large areas of similar data values, limited data range,
or null areas (or all 3) with no apparent impact on TNT
performance and often improving processing times in subsequent uses.
An irregularly bounded image representing just the area of the
United States, such as the MODIS 24-bit world color
mosaic provided on your TNT Global
Reference Geodata DVD, yielded almost 7:1 lossless JPEG2000 compression.
Grayscale images or DEMs can yield equally good results since they
typically only represent 8 bits or less of variation or in 16-bit raster objects
seldom actually have data values that vary over the complete 16-bit range
(typically only 11 or 12 bits of actual data range).
In another case, a mostly rectangular, 24-bit color, 30-centimeter (1-foot) resolution image of Lincoln, Nebraska yielded 3:1 lossless compression.
Some of this reduction was due to the fact that it was only available as
a lossy 10:1 MrSID compressed image, which permanently reduced its variability,
as well as having null areas over about 5 percent of the image.
A typical image (full rectangle, high resolution therefore high spatial
variability, and no null areas) will lossless compress only to about 2:1.
Georeferencing
using GeoJP2 Files.
Overview.
JPEG2000 was not
created specifically for use in remote sensing or other geospatial applications.
Thus its initial, official, adopted ISO definition providing the basis
for a standalone JP2 image file does not contain any specifications for
including georeference information. JPEG2000
compression now used internally for your MicroImages Project Files uses the same
georeference subobjects as for all other raster objects.
The ISO standard
does provide for the inclusion of custom blocks of information.
Thus various software developers can add and promote their own approaches
for georeferencing a standalone JP2 file. These
are being added to the TNT processes
as they are encountered. The
strategy of exporting and using an external auxiliary J2W georeference file in
the ArcWorld format was part of the initial TNT
V6.7 support of JP2 files. RV6.9
adds another method for directly embedding this information within the JP2 file
to create what is called a GeoJP2 file.
Last
Minute Addition: All comments
about GeoJP2 and Mapping Science in this MEMO are now subject to uncertainty.
LizardTech, the developers of MrSID and related products, have now taken possession of the assets of Mapping Science via a lawsuit.
For more information on this topic see www.lizardtech.com/solutions/ms/
or
www.mappingscience.com/msi.htm.
Concept.
There are
provisions in the official JPEG2000 structure for including custom information
(metadata blocks) in the standard structure of a JP2 file.
These blocks can be used for anything as long as they can be
automatically ignored by any and all other JPEG2000 compliant programs without
invalidating the ISO standard and the general use of that JP2 file.
A file that complies with these requirements is still a JP2 file.
However, software created to use JP2 files and to be aware of, and act
on, these custom blocks (for example, the TNT
products) can use them for their special purpose.
Mapping Science (www.mappingscience.com)
is promoting the use of a custom block to provide georeferencing for an ISO
compliant JP2 file thus yielding what they have named a GeoJP2 file (in parallel
to the naming extension of TIFF to GeoTIFF).
They use a specific metadata block in the JP2 file to store data blocks
that are actually a GeoTIFF file. This
GeoTIFF file has all its TIFF tags but only 1 raster cell.
When this metadata block is available, the TNT
products read the embedded GeoTIFF information and use it as the georeference
for that GeoJP2 file.
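For readers curious about the container mechanics, the sketch below walks the top-level boxes of a JP2 file in Python (the box framing follows the published JP2 file format; identifying which 'uuid' box actually carries the GeoJP2 GeoTIFF payload is simplified away here, since that requires checking its 16-byte UUID value):

    import struct

    # List the top-level boxes of a JP2 file. GeoJP2 stores its degenerate
    # (1-cell) GeoTIFF inside a 'uuid' box among these.
    def list_jp2_boxes(path):
        boxes = []
        with open(path, "rb") as f:
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, box_type = struct.unpack(">I4s", header)
                header_size = 8
                if length == 1:                  # 64-bit extended length follows
                    length = struct.unpack(">Q", f.read(8))[0]
                    header_size = 16
                boxes.append((box_type.decode("ascii", "replace"), length))
                if length == 0:                  # box runs to the end of the file
                    break
                f.seek(length - header_size, 1)  # skip this box's payload
        return boxes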
Implementation.
MicroImages JP2
files created since V6.70 are
accompanied by an auxiliary "ArcWorld" (*.j2w) file of the same name as the
JP2 file. This strategy conforms to
the design first used by ESRI to georeference imagery and now used in many other
products. The disadvantage of this
approach is that the map projection of the data file is not included in the
ArcWorld file. Use of ArcWorld
georeferencing therefore requires that the user of the imagery specify the
projection when using the image.
The advantage of
the GeoJP2 file is that the georeferencing is embedded and can not become separated from the raster contents of the JP2 file. It also provides more complete georeferencing, including map projection and datum information.
TNT products can now directly
link to and use a GeoJP2 file and its associated georeference information.
The TNT Import process
converts this georeference information to the appropriate internal form.
The TNT Export process now
automatically includes the GeoJP2 metadata (as well as optionally creating an
ArcWorld file) so that the GeoJP2 file can be used by those programs that use it
for a georeference.
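To make the contrast concrete, a world file such as the *.j2w files mentioned above holds only six numbers defining the cell-to-map affine transform, with no projection or datum. A minimal Python sketch of applying one (the example values are hypothetical):

    # Read the six world-file parameters and convert a column/line position
    # to map coordinates. Nothing here identifies the map projection or
    # datum -- exactly the limitation of the ArcWorld approach noted above.
    def read_world_file(path):
        with open(path) as f:
            return [float(v) for v in f.read().split()[:6]]   # A, D, B, E, C, F

    def cell_to_map(col, line, params):
        a, d, b, e, c, f = params
        return a * col + b * line + c, d * col + e * line + f

    # Example *.j2w contents for a hypothetical 15 m image:
    #   15.0   0.0   0.0   -15.0   432000.0   4567000.0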
Performance.
MicroImages has
conducted 2 tests to determine how rapidly typical images can be compressed with
JPEG2000 in RV6.9 of the TNT
products using Windows. The results
of these tests are presented here. Both
tests were conducted on a computer with a single 2.4 GHz Pentium 4 processor
with 1 GB of real memory using Windows XP and a single, slow 4800 rpm hard
drive. Remember that in all large
raster to raster operations, using 2 drives (one as a source and one as a
destination) may yield better performance.
It is also important to continue emphasizing that once JPEG2000
compressed, these images can be viewed at any zoom scale in the TNT
products in 1 to 3 seconds.
140 MB Test
Image.
The smaller
test compressed a typical 140.4 MB image representing a 24-bit natural color
composite of a Landsat TM image of 7000 lines by 7000 columns.
The Raster Extract process was used to access the uncompressed raster
object and create the new raster object using JPEG2000 compression with these
results.
- using lossless compression: 51.6 MB in 40 seconds
- using lossy best quality: 25.0 MB in 26 seconds
Using the export to a JP2 file available in V6.8, and now also for export to a GeoJP2 file, the times are:
- using lossless compression: 51.4 MB in 40 seconds
- using lossy best quality: 24.8 MB in 43 seconds
- using 10:1 lossy: 14.0 MB in 42 seconds
A final test on
the same computer and a single hard drive used the Raster Extract process to
extract the entire typical 140.4 MB raster object and recreate it.
The new raster object was also not compressed and was placed in a new TNT
Project File. This required 40
seconds. From this and the above
compression tests you see that using lossless JPEG2000 compression to reduce
this test raster object to 1/3 its original size has no performance impact on
this or other TNT processes.
Furthermore, using the best quality lossy JPEG2000 compression where appropriate would reduce the raster object to 1/6 its size and is actually faster (26 seconds) than writing the uncompressed raster object (40 seconds), since the compressed output raster object is smaller and thus requires less time to write.
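The fractions quoted above are easy to verify from the reported sizes (plain arithmetic, shown only for reference):

    # Ratios from the 140.4 MB test figures reported above.
    original, lossless, lossy_best = 140.4, 51.6, 25.0   # sizes in MB
    print(round(original / lossless, 1))    # ~2.7, roughly 1/3 the original size
    print(round(original / lossy_best, 1))  # ~5.6, roughly 1/6 the original size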
IMPORTANT
NOTE: A TNT
process that is acting on an uncompressed raster and applies JPEG2000
compression to the output can be faster since it is writing a smaller raster
object.
IMPORTANT
NOTE: Using JPEG2000
compressed raster objects can increase performance when they are read by a TNT
process since they are smaller.
2.78 GB Test
Image.
The larger test
used the uncompressed 2.78 GB version of the NASA MODIS color image on the
sample Global Reference Geodata on the DVD provided with RV6.9.
When you view this image you will note that most of its cells contain the
single uniform value representing the area of the oceans.
As a result the lossless compression is almost 6:1.
The Raster Extract process was used to access the uncompressed 2.78 GB
raster object and create the new raster object using JPEG2000 compression with
these results.
- using lossless compression: 484 MB in 34 minutes
- using lossy best quality: 358 MB in 32 minutes
Using the export to a JP2 file available in V6.8, and now also for export to a GeoJP2 file, the times are:
- using lossless compression: 481 MB in 34 minutes
- using lossy best quality: 357 MB in 37 minutes
- using 10:1 lossy: 267 MB in 37 minutes
A final test on
the same computer and a single hard drive used the Raster Extract process to
extract the entire large 2.78 GB raster object and recreate it.
The new raster object was also not compressed and was placed in a new TNT
Project File. This required 116
minutes. From this and the above compression tests you see that using lossless JPEG2000 compression, which reduces this test raster object to about 1/6 its original size, made the operation about 3 times faster; the same applies in other TNT processes when lossless JPEG2000 compression is applied to the new raster object. As you can see from these tests, any reduction in file size (for a linked file or for a raster object) is translated into performance increases due to reduced read operations when using JPEG2000 compression.
Thus, subsequent TNT
operations that need to read this JPEG2000 lossless or lossy compressed object
would also be similarly faster using this smaller raster object.
IMPORTANT
NOTE: A TNT
process that is acting on an uncompressed raster and applies JPEG2000 compression to the output can be substantially faster since it is writing a
smaller raster object.
IMPORTANT
NOTE: Using JPEG2000 compressed
raster objects can significantly increase performance when they are
read by a TNT process since they
are smaller.
Improved Memory
Management.
Background.
The entire JPEG2000 concept is optimized for decompressing its result; this is what controls its performance from the end user's perspective. The time-to-compress is the responsibility of the data preparer who is usually not working under the same time constraints. As a result, the fastest decompression possible is usually the goal and slower compression will be accepted to meet this objective.
Compression takes
computer processor time and memory to hold the JPEG2000 compressed data as it is
built up from the specific data set. The
process can not simply look at the first part of the image and determine
a good model for the entire image. For
example, the corners of the image might be all nulls or water thus producing a
model unsuitable for the rest of the image.
Furthermore, the procedure can not sample the image to set up the
compression model. Sampling would
not correctly represent the high frequency and noise components of the image.
Sampling also requires reading a lot of a potentially very large image in
some other potentially cumbersome format.
What's the Problem?
MicroImages'
clients using V6.8 have not reported
reaching its memory limitation during JPEG2000 compression into JP2 files.
However, a number of clients are now discussing the capability of the TNT
products to apply JPEG2000 compression to huge images in the range of 50 to 300
GB on 32-bit Windows based platforms using XP, 2000, and 2003.
Examples of these kinds of projects are:
- a client's mosaic of several hundred, 1-meter ortho-IKONOS images;
- DigitalGlobe's 15 meter, 24-bit color Landsat mosaic of the United States of 275 GB; and
- a 30 centimeter (1 foot) USGS color image mosaic of Lincoln, Nebraska and surrounding area of 1350 square kilometers (525 square miles) and 47 GB.
These projects are
all testing the robustness of the TNT
JPEG2000 compression implementation as well as other TNT
processes, such as mosaic. MicroImages
has used two of these as internal projects to determine how big an
image/raster can be handled in TNTmips
and how fast.
Current
Approach.
Fortunately
the developer of the Kakadu JPEG2000 libraries used by MicroImages and many
others for the JPEG2000 compression and decompression has released a new library
with even better memory management (www.kakadusoftware.com).
This has been incorporated into RV6.9
and is more conservative on the use of memory than the previous Kakadu library
used in V6.8.
The impact of the amount of real and virtual memory available to this
decompression process will be discussed in detail below as it is significant for
massive images.
Exceeding the real and virtual memory permitted by Windows during compression can not be predicted in advance and may even cause Windows to crash. Thus, the TNT compression control dialog now lets you set the maximum real memory limit to be used during compression, which correspondingly slows it down. To further improve the handling of huge input images, an option to choose "Automatic" progression order has been added to the control dialog and is the default. It will keep your operation within your specified memory limitations. To avoid confusion, this setting and others used to control the advanced technical characteristics of TNT's JPEG2000 compression have been moved to an Advanced Settings window, and appropriate defaults are provided.
Why Create Single
Massive Images?
Avoid Edge
Effects.
Why make these
huge, massive single images that are far bigger than any media on which they can
be readily distributed and are also impractical for movement on the Internet or
a LAN? Why not simply chop the
large images into smaller geographically related tiles that fit on multiple CD
or DVD media? The issue is that
JPEG2000 lossy compression (and we would assume those of MrSID and ECW) can not
produce the identical losses when applied separately to pieces of a larger
image. Consider the case of
applying 10:1 lossy compression on 2 adjacent tiles making up a larger image
where one has a very large uniform color lake and the other does not.
The result is that the amount of detail lost in the image containing the
lake for a 10:1 lossy compression will not be as significant as those in the
10:1 lossy compression of the tile containing only detailed land features.
Thus, when you apply fixed ratio lossy or best quality lossy, the detail
lost will vary slightly from piece to piece of your total image.
Thus, when these tiles are mosaicked or displayed together, horizontal or
vertical seams will show as the human eye is a wonderful comparator and detector
of lines.
At some future
time, it may be that the developers of the JPEG2000 compression libraries
will make provision for somehow developing a lossy compression model for the
entire large area of interest from pieces and then permit it to be applied to
these tiles one-by-one. However,
this may not be possible as JPEG2000 lossy compression works on removing spatial
noise and high spatial frequency features.
As a result, any kind of sampling of a large area can not be used.
Correspondingly, if a small subarea is used to develop the compression
model it might arbitrarily hit a lake or a null area.
Anyway, those developing and promoting JPEG2000 are not yet that
concerned with handling huge images and images that are not rectangular (that is, irregularly bounded or containing null areas); these are common
characteristics of the images used in remote sensing and geospatial analysis.
Recommended
Procedure.
With these limitations in mind, the most appropriate approach to this problem, since you own TNTmips, is to keep your original images in a lossless condition until they are mosaicked. This means you will be starting with many large images and ending up mosaicking them into a huge single image. Next
use the Raster Extract process to copy the entire mosaicked image to a lossy
JPEG2000 compressed raster object. Then
use this single lossy compressed raster object or cut it into the appropriate
pieces for distribution. Alternatively, you could export the lossless mosaicked image to a single large lossy JP2 or GeoJP2 file or to multiple smaller ones.
By these steps you can produce a JP2, GeoJP2, or TNT
raster object that will fit on CD or DVD and will not produce visible seams and
artifacts at the joining edges.
At
this point you might wonder why not simplify the above procedure by adding
JPEG2000 compression to the Mosaic process.
Eventually this will be provided; however, it would not save any drive
space. The large, lossless raster
would still have to be created by mosaic as a temporary file on your drive so
that the JPEG2000 compression could be applied to the total image for the
reasons outlined above.
Testing Memory
Management.
MicroImages has
conducted 2 tests to determine how large of an image can be compressed with
JPEG2000 in RV6.9 of the TNT
products using Windows. The results
of these tests are presented here. Both
tests were conducted on a computer with a single 2.4 GHz Pentium 4 processor
with 2 GB of real memory, Windows XP, and about 1 terabyte of hard drive space.
The smaller test compressed a 46.6 GB image and did not "go virtual." It built up and kept the JPEG2000 compression model characteristics within the available real memory and completed in 3 to 4 hours. The test using a 275 GB input image did "go virtual." It constantly swapped data in and out from virtual memory and took 2 weeks to finish, but it did finish! While
the ratio of the sizes of these input images is about 5 to 1, the time to
compress differed by about 100:1. Observations
of Windows memory management during the 275 GB tests indicated that about 0.5 GB
of additional real memory would have been required to complete this task in 20
to 24 hours. This is just about the
proportion of the 2 GB of real memory that was being used directly by Windows
and was not available as real memory to the TNT
process.
47 GB Test
Image.
The following test
case completed within the 2 GB total memory limitations of Windows XP, 2000, and
2003. During each of the tests it was observed that the critical compression
activities stayed within and used the available real memory.
The test used a 24-bit color composite image of an area of 1350 square kilometers (525 square miles) centered on the city of Lincoln, Nebraska. This image was collected in the spring of 2002 by a camera for USGS in several hundred frames and then orthorectified and mosaicked. Its ground resolution is 30 centimeters (1 foot), yielding 145,000 lines and 115,000 columns.
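The uncompressed size quoted for this mosaic below follows directly from those dimensions (simple arithmetic, shown for reference):

    # Uncompressed size of the 24-bit (3 bytes per cell) Lincoln mosaic.
    lines, columns, bytes_per_cell = 145000, 115000, 3
    size_bytes = lines * columns * bytes_per_cell   # 50,025,000,000 bytes
    print(round(size_bytes / 2**30, 1))             # ~46.6 (binary GB)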
This image was
purchased by MicroImages and was provided as 4.66 GB delivered on 2 DVDs in many
MrSID tiles using lossy 10:1 compression. These
MrSID images were batch imported by TNTmips into decompressed raster objects and mosaicked, yielding a single raster object
of 46.6 GB. The Raster Extract
process was then used to compress this raster into a new JPEG2000 compressed
raster object with these results:
- using lossless compression: 15.36 GB in 3 hrs 45 min
- using lossy best quality setting: 8.42 GB in 3 hrs 10 min
- using 12:1 lossy to fit on a single DVD: 3.97 GB in 2 hrs 45 min
The 46.6 GB raster object was also exported to a JP2 file as follows:
- using 12:1 lossy to fit on a single DVD: 3.88 GB in 3 hrs 41 min
Decompression of
this large 46.6 GB raster object or any of the JPEG2000 compressed test results
is not memory or size dependent. Viewing
any of these test results (that is, raster objects or linked JP2 files) at any scale in a TNT product requires 1 to 3 seconds from a hard drive and from 1 to 6 seconds from DVD in a 16X reader. Side by side comparisons at full 1:1 zoom show that the original 10:1 MrSID images are almost identical to the 12:1 JP2 file.
275 GB Test
Image.
The following test
case was also completed within the 2 GB total memory limitations of Windows XP,
2000, and 2003. Although the test
platform had the maximum of 2 GB real memory, Windows and associated system operations captured and were using at least 0.5 GB of this real memory.
Thus, during each of these tests it was observed that the critical
compression activities required more than the available real memory and
extensive use of virtual memory was needed.
The test used a 24-bit color composite image of the area of the 48 "lower" or conterminous states of the United States (only the area inside the borders of the United States, excluding Hawaii and Alaska). The large areas of Canada, Mexico, and the oceans inside the rectangle bounding this area are represented by null cells.
The image represents thousands of mosaicked Landsat frames with a ground
resolution of 15 meters. It is the
property of DigitalGlobe and is used in assisting their clients in locating
ground areas of interest for ordering other products.
The image was provided on a hard drive in 2 by 2 degree uncompressed TIFF
files. It was mosaicked in TNTmips
for this test into a single raster object of 180,393 lines and 513,699 columns.
The Raster Extract process was then used to compress this raster into a
new raster object with these results:
- using lossless compression: 81.5 GB in 313 hrs
The 275 GB raster object was also exported to a JP2 file as follows:
- using lossless compression: 81 GB in 283 hrs
- using 100:1 lossy compression: 2.5 GB in 23+ hrs
Decompression of
this huge 275 GB raster object or any of the JPEG2000 compressed test images is
not memory or size dependent. Viewing
any of these test results (that is, raster objects or linked JP2 files) at any scale in a TNT product requires 1 to 3 seconds from a hard drive and from 1 to 6 seconds from DVD in a 16X reader for the 100:1 compressed JP2. Side by side comparisons at full 1:1 zoom and larger show that the original TIFF images do contain high resolution details that are slightly smoothed in the 100:1 lossy compression, but that the result is still very satisfactory for use as a reference background on DVDs for many other kinds of geospatial overlays of the "lower" United States.
This would include reference to individual property parcels.
Discussion.
It is now
appropriate to consider how this 275 GB JPEG2000 compression test will perform
if more real memory is made available. Alas,
every 32-bit version of Windows is limited to addressing 2 GB of real or real
plus virtual memory. It does not
matter if it's all real memory or some of it is virtual memory on the hard
drive. As a result, adding more
real memory on the motherboard of a 32-bit Windows machine is not a workable
approach if you want to compress images exceeding about 200 GB in size in TNT
products and are unwilling to wait weeks. However,
TNT products are now available for
several different 64-bit platforms, which can use more real memory and are also
inherently faster. The following
larger memory options are available for use with the new 64-bit versions of the TNT
products for this and other large projects.
64-bit Linux on
AMD Opteron. Linux kernel
V2.4.0 supports the use of 16 GB of real memory.
The newly released kernel V2.6 supports the use of 64 GB of memory on
32-bit processors and 1 TB on 64-bit processors.
MicroImages has available a platform using dual AMD 64-bit Opteron
processors (US$2000 without a monitor) and 8 memory slots each capable of using
up to 2 GB memory modules. The
platform comes with SuSE 8, which uses kernel 2.4 and will address up to 16 GB
of real memory (SuSE 9 for kernel 2.6 and 1 TB memory support is now also
available). Arrangements are
underway to increase the memory of this test platform beyond the original 1 GB
so that the 275 GB compression test can be rerun without going virtual.
With 16 GB of real memory, this platform should permit the TNT
products to apply JPEG2000 compression to a terabyte image although this would
take several days.
64-bit Solaris
on SPARC. A Sun SPARC platform
running Solaris 8.x or 9.x with adequate real memory would be ideal for TNT's
JPEG2000 compression of the 275 GB or larger images.
All but the entry level Sun platforms using Intel processors would have
more real memory and thus are excellent choices for compressing 275 GB or even
larger images.
64-bit Linux on
AMD Athlon. SuSE 8 and the beta
of Windows 64-bit XP run well on the AMD Athlon 64 and Athlon 64 FX platforms (US$800
without a monitor). However, only a
few motherboards are available for these chips at this time and all seem to
limit real memory to 2 GB.
64-bit beta
Windows XP on AMD. The TNT
products are available for use with this 64-bit version of Windows on both the
64-bit AMD Opteron and Athlon based platforms.
At this time MicroImages can not report upon the maximum real memory
supported by this beta Windows. However,
the hardware available to run this Windows is the same as that outlined above
for use with Linux so the same practical limits would apply (16 GB for Opteron
and 2 GB for Athlon) until new motherboards are released.
64-bit Mac OS X
on G5. You can populate the
motherboard of the latest Apple G5 dual processor platform with up to a maximum
of 16 GB. Some Mac OS X operating
system actions will utilize this much memory.
However, V10.2.3 of Mac OS X still uses 32-bit libraries, which the TNT
products also share. This limits
each TNT process to 32-bit addressing, or to using a maximum of 2 GB of real or real plus virtual memory.
64-bit beta
Windows XP on Itanium II. The TNT
products are not yet available for this version of Windows on the Itanium II
platform. Current Itanium II
desktop platforms and this version of Windows cost about US$10,000 and are not
worth this price on a comparative basis.
DV7.0?Faster
Massive Files.
Just as this MEMO
was being printed, a new Kakadu JPEG2000 library was released that provides
MicroImages the basis for testing further memory management improvements.
These should provide the basis for improving the performance of TNTmips
for compressing massive images on computers with 2 GB or less of real memory and
within the memory management limitations of 32-bit versions of Windows.
Use of JPEG2000 in
"Photo" Viewers.
NOTE:
Few software products, image viewers, and plug-ins claiming to use JP2
files can handle large JP2 files.
Size
Limitations.
Popular application
software (for example, QuickTime, Preview, and Photoshop) can not handle JP2
files much greater than 2 GB. Most
viewers and plug-ins for browsers also do not support large JP2 files for
similar reasons. It appears they
all decompress the JP2 file into memory, probably because they do not support
pyramided rasters for rapid direct access or use a JPEG2000 library optimized
for this purpose. In many of these
products, selecting a large JP2 file causes a memory overflow and they crash or
slow down so much that the program is effectively hung.
Doubly bad is that you will get no warning when you select these large
files since the product does not check and predict in advance how big the
uncompressed image will be.
Determining
Which Product is Responsible.
Improper behavior
of TNT produced JP2 and GeoJP2 files
in other software does not mean that there is a problem in the TNT
produced JP2 or GeoJP2 file. TNTatlas
can be used as a viewer to link to, and view any correctly formed JP2 and GeoJP2
file or JPEG2000 compressed TNT
raster object of any size. The
creator of the Kakadu library used by MicroImages and many others provides a
FREE standalone JP2 viewer. It can
also be used to view and verify that a JP2 or GeoJP2 file of any size is
correctly structured according to the JPEG2000 options and standard.
This viewer is kdu_show.exe and can be obtained by downloading the WIN32
Executables along with its manual from www.kakadusoftware.com/downloads.html. Mapping
Science also provides a free MSI Viewer that can deal with very large JP2 or
GeoJP2 images. It can be downloaded
from www.mappingscience.com/msi.htm.
Compressed
Images from Personal Cameras.
A few personal
cameras will optionally save pictures as large, lossless TIFF files.
More cameras can optionally save pictures into large lossless RAW rasters.
Unfortunately, these RAW (*.raw) pictures are in fact the "raw," uncompressed data collected by the imaging sensor. Thus, the RAW format is not standardized in any way and can even vary from model to model from a single manufacturer and can only be used in each camera's proprietary off-line image enhancement and processing software.
There is also a discussion underway in that industry that the best way to
standardize the access to the RAW formats is by using a manufacturer supplied
TWAIN driver, the same TWAIN driver concept used in standardizing access to
scanners.
The
use of a lossless format in personal cameras is not of immediate wide scale
interest since they create such big files.
The camera's flash memory could only hold a few lossless images.
It also takes quite a long time after the image capture to write these
large images into the flash memory. MicroImages
has not yet encountered any personal digital camera that will save pictures into
the standard lossy or lossless JP2 file. This
seems to be only in the discussion phase because fast JPEG2000 compression takes
more computing resources than JPEG, which is simple and well entrenched.
The replacement of JPEG and PNG by JPEG2000 for the storage and archival of photographs is well underway. A
good place to start reading about this is the short article JPEG2000:
the Killer Image File Format for Lossless Storage.
Ken Milburn. 11/2003 at www.oreillynet.com/pub/a/javascript/2003/11/14/digphoto_ckbk.html.
MicroImages would
be pleased to hear from you with further information on this topic.
2D
Display.
Antialiasing.
Thin lines (for
example, 1 screen pixel wide) can now optionally be drawn with antialiasing
and/or hinting applied. By default
both these options are on and can be toggled off/on for all new views on the
Display/View Options dialog on its View tabbed panel.
Both options can also be independently toggled off/on for each view using
the View window's Options menu. A
color plate (1/2 page) is attached to illustrate these effects and is entitled Antialiasing
and Hinting of Thin Lines.
LegendView.
Unnecessary
vertical spacing has been removed from the LegendView so the individual entries
are closer together. Now more
legend entries are visible without requiring scrolling of the LegendView panel.
Installing
Scripts.
In V6.8
the SML Macro Scripts you installed
to act on your current view or for any other analyses were added to the menu bar
as named entries under the heading Macros.
They were also automatically added to the view's icon bar as a new icon with the script's name as its ToolTip. Some
of you have added many Macro Scripts resulting in so many icons that the icon
bar became confusing. In these
circumstances, omitting the icons and providing access only via the cascading
Macros menus is recommended. In
RV6.9 during installation of a Macro
Script, you have the option of omitting the icon.
This and other adjustments to the process of gaining access to the SML development tools and installing Macro Scripts are discussed and illustrated in
the attached color plate entitled Macro Script Setup.
Linear Transparency
in Color Palettes.
In V6.8
you could manually set the transparency of each color cell by cell in a new or
existing color palette in the Color Palette Editor dialog.
This can be tedious if you want to create a color palette with a smoothly
varying transparency from a low cell value to high or vice versa.
This dialog now lets you select the transparency values for the upper and
lower cell values in the palette or any intermediate pair of cell values.
You can then specify that you wish the transparency of the color of all
the intermediate cell values to vary linearly between your selected end
settings.
An example of the
use of linearly varying color transparency would be to assign a color palette
with high transparency (95%) to the low elevations in a DEM and low transparency
(40%) to the tops of the mountains. When this DEM is overlaid onto a grayscale image of the same area, the complex land cover detail shows through without color in the flat areas, while the mountains are color coded by altitude with increasing opacity.
This sample application is illustrated in the attached color plate
entitled Automatically Vary Transparency in Raster Color Palettes.
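In pseudocode terms, this linear option amounts to interpolating the transparency value across the palette's cells. The short Python sketch below is illustrative only; it is not SML or the Color Palette Editor's internal code. It reproduces the DEM example above, with 95% transparency at the lowest cell value and 40% at the highest.

```python
def interpolate_transparency(num_cells, low_transparency, high_transparency):
    """Return a per-cell transparency list varying linearly between two endpoints.

    num_cells         -- number of cells in the color palette (e.g., 256)
    low_transparency  -- transparency (%) assigned to the lowest cell value
    high_transparency -- transparency (%) assigned to the highest cell value
    """
    if num_cells == 1:
        return [low_transparency]
    step = (high_transparency - low_transparency) / (num_cells - 1)
    return [low_transparency + i * step for i in range(num_cells)]

# The DEM example above: 95% transparency at the lowest elevations,
# 40% transparency at the mountain tops, for a 256-cell palette.
palette_transparency = interpolate_transparency(256, 95.0, 40.0)
```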
Zooming Directly to
Positions in a View.
A new Zoom to
Location icon on the View window's toolbar provides a Zoom to Location
dialog to enter the coordinates to reposition the center of the view.
These coordinates can be in any of the coordinate systems or projections
supported by the TNT products.
You can also specify the zoom at the new position by setting a scale or
the height or width of the view in the selected ground measurement.
These new capabilities are illustrated in the attached partial color
plate entitled Zooming to a Specified Location.
Dual Coordinate
Readout.
The automatic
coordinate readout at the bottom of every View window now, by default, displays
coordinates in two systems. By
default these are set to display coordinates in UTM and latitude/longitude.
You can independently toggle off/on either of these position reports,
select their projections, set their units, and choose the desired
latitude/longitude (DMS) format from the View window's Options menu.
This dual readout is illustrated in the attached partial color plate
entitled Viewing Two Position
Reports.
* Adding Custom
Tools to Groups and Layouts.
Background.
Groups
and layouts are used to define and maintain the relationships between
collections of TNT geodata objects
in various Project Files or linked external files in other formats.
Creating a group or layout organizes a collection of specific geodata
into a meaningful unit and defines how it is used together in a display group or
layout, a map layout, an edit session, in an atlas, and so on.
V6.8 permitted you to add
these SML tools (Macro and Tool
Scripts) to every View window where they are always presented until manually
removed. This approach is suitable
if your special tools can be generically applied to any view, or at least to any
raster object, any vector object, and so on.
Implementation.
SML
can also be used to create complex analysis tools that are dependent upon one or
more objects in a specific group or layout.
To streamline this application of scripts for data dependent custom tools
in RV6.9, Macro Scripts and Tool
Scripts can now be stored with a specific group or layout.
When that group or layout is selected these associated scripts are added
to the menu bar for your use or additionally as an icon to the toolbar in the
View window. Unlike the previous
approach, which is still available, these menus and icons are not always part of
the view. They are only presented
when the group or layout to which they are attached is displayed.
These new procedures for managing and applying custom tools are
illustrated in the attached color plate entitled Use Layouts to Customize
TNTatlas/X.
Delivering scripts
with the layouts is a particularly effective way to add unique and data
dependent tools related to the specific contents of an atlas.
When they are part of the atlas layout, any copy of TNTatlas
software which accesses and works with that data content will also have its
special tools. This is even the
case if the platform using the TNTatlas
varies from Windows, to Mac OS X, to Linux since these embedded tools are
written in SML, which is a platform
independent feature of all TNT
products. This is also illustrated
in the attached color plate entitled Use Layouts to Customize
TNTatlas/X.
A good example of a
data dependent tool is a Tool Script created in SML
that makes use of the attributes of a particular vector layer in the group.
The Tool Script implements an SML query for the element selected with the script's cursor tool.
It might combine and test data from a number of attributes and return a
variety of results. While this Tool
Script can be very useful, it is data dependent and, thus, works best if
associated with a group or layout that loads the selected layer(s) for which it
was designed. This type of data dependent Tool Script, acting on a selected element's attributes, is used for the examples in the attached color plate entitled Modifying SML Tool Scripts for
New Applications.
Reusing Your
Tools.
Using a template
can be a convenient way to assemble geodata into a series of similar layouts,
for example creating a series of maps. By
design these new layouts will usually contain similar geodata layers merely
covering a different ground area. As
a result specialized scripts can be developed to provide tools for use with the
specific data in the master layout and then transferred as part of the template
process to each new daughter layout in the series.
Data dependent
scripts can also often be easily modified to operate on quite different data
layers. A script providing a tool
used to query attributes attached to elements in one vector object can be easily
modified to change both the query and the attributes.
For example, a Tool Script designed to locate and select streets by name
can be adapted with a few simple changes to select cities by name in an entirely
different area and vector object. These simple changes are illustrated in the
attached color plate entitled Modifying SML Tool Scripts for New
Applications.
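The "few simple changes" referred to above amount to changing the layer, table, and field names that the query references. The Python-style sketch below is only a schematic of that idea, not SML Tool Script code, and every object, table, and field name in it is hypothetical.

```python
def make_name_selector(layer, table, field):
    """Return a data dependent selection tool bound to one layer's attribute table.

    layer, table, and field are the data dependent parts: changing them adapts the
    same tool from one dataset (streets) to another (cities).
    """
    def select_by_name(name):
        # Hypothetical query: return the elements whose attribute matches 'name'.
        return [element for element in layer.elements
                if element.attributes[table][field] == name]
    return select_by_name

# Original tool: locate streets by name in a street vector layer.
# select_street = make_name_selector(street_layer, "STREETS", "STREET_NAME")
# Adapted tool: the same logic selects cities by name in a different vector object.
# select_city = make_name_selector(city_layer, "CITIES", "CITY_NAME")
```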
DV7.0 - Styled DataTips.
Add Styles.
DataTips can now be set up to use the same TNT
embedded text style codes as any single text layer.
You can use these style codes to greatly enhance the appearance of your
DataTips. Just a few of the styles that can now be used in your DataTips are control of the frame fill color and each text element's color. The text element's font, size, and rendering style (bold, italics, outline, ...) can also be set and rendered in the DataTips. Tabs can be set, and text alignment can be set to left, right, center, or justify.
Advanced
Applications. Using these style
codes you can add a new kind of interactive information flow to your DataTips.
For example, the floodplain zone attribute for hidden polygons can be
used in a virtual field to set the background color of the frame for the DataTip
to be light green, yellow, or red. Using this approach, the DataTip could show a property's ownership information in text and use the frame's background color to provide an alert of its floodplain zoning (for example, light red means subject to frequent flooding).
Using a similar approach, even the color of a text element retrieved from
a field can be varied using a virtual field.
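The virtual field behind such a DataTip is essentially a lookup from an attribute value to a style setting. A minimal Python sketch of that lookup follows; it is not SML or the TNT style-code syntax, and the zone codes and colors are assumed for illustration.

```python
# Hypothetical floodplain zone codes mapped to DataTip frame background colors.
FLOOD_ZONE_COLORS = {
    "OUTSIDE_FLOODPLAIN": "light green",   # no flood risk
    "500_YEAR":           "yellow",        # moderate risk
    "100_YEAR":           "light red",     # subject to frequent flooding
}

def datatip_frame_color(flood_zone):
    """Choose a frame background color from the hidden polygon's flood zone attribute."""
    return FLOOD_ZONE_COLORS.get(flood_zone, "white")  # neutral color if zone is unknown

# Example: a parcel tagged "100_YEAR" gets a light red DataTip frame.
print(datatip_frame_color("100_YEAR"))
```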
Color Plates.
DV7.0 already supports these new Styled DataTips. Illustrations of this application will be among the new color plates posted on microimages.com.
DV7.0 - GraphTips.
What Are They?
Those clients who have prepaid for RV7.0
can download DV7.0 and gain access
to a new and innovative 2D visualization feature.
2D views can now be set up to use Graphical DataTips or GraphTips for
short. GraphTips pop in and out
just like the DataTips, ToolTips, and HelpTips.
How Did They
Evolve? The idea for GraphTips came about from discussions of how to provide
you with the styled DataTips also now available in DV7.0.
Eventually this discussion led to the realization that it was not
difficult to provide you with a means to set up something that behaves like a
DataTip but is graphical in design. It simply uses the same familiar SML
scripting language, including queries, to pop in a graphic based upon attributes
of the nearest feature. Conceptually
it permits your information flow designs to go beyond complex, multiline
DataTips that use virtual fields to model the values presented.
Now you can also create a script that will use simple drawing tools to draw and pop in GraphTips.
Why Are They
Needed? If you create a large
paper map product, TNTmips provides
you with a variety of means of drawing graphical pins all over it.
A large map can easily portray hundreds of symbolic pins and graphs each
representing the information at a point. Pie
diagrams and histograms are simple examples and CartoScripts provide the basis
for many others.
GraphTips are the
interactive equivalent of the printed pin map.
When your product is interactive, such as a display layout or a TNTatlas,
the pin map approach is usually limited to a few points and variables in the
view and requires careful control of layers by scale.
If you are zoomed out with relative scaling set for your pins they can
become too small to "read." If
you are zoomed out with absolute scaling set for your pins, you can only use a
limited number or they will cover everything else in the viewing area.
If you are zoomed in with relative scaling set, the few pins in the
current view can obscure it.
Clearly pin mapping
has limitations in a highly interactive, user-driven display oriented to
visualization. Pin mapping can use
interactive view scale control features but is still limited in the complexity
and number of pins you can provide. The
ultimate control is thus to print the map, which fixes the relative scale of the
features and layers it provides. GraphTips
allow you to present complex information at any view scale for every point in
the view, to control how it will relate to the image and map features in that
view, and to dynamically present changes in the database information being used.
Examples.
Some examples of the easily understood and easily created GraphTips will assist you in understanding this new and powerful visualization feature. A GraphTip could be a color pie diagram of the 4 principal ethnic groups (that is, 4 fields) of a city, with its radius determined by the population from another field. To see this diagram for any city, you simply move the cursor to that city.
This is a very simple GraphTip script.
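To suggest how little logic such a GraphTip needs, the Python/matplotlib sketch below draws a pie of four group fields with a radius scaled by population. It only mimics the concept in a general-purpose language; an actual GraphTip would be an SML script drawing into the pop-in graphic, and all field names and numbers here are invented.

```python
import matplotlib.pyplot as plt

def draw_city_graphtip(record, max_population, max_radius=1.0):
    """Draw a pie of four group fields, with the pie radius scaled by population."""
    groups = ["GROUP_A", "GROUP_B", "GROUP_C", "GROUP_D"]   # hypothetical field names
    values = [record[g] for g in groups]
    # Radius proportional to population relative to the largest city in the layer.
    radius = max_radius * record["POPULATION"] / max_population
    fig, ax = plt.subplots(figsize=(2, 2))
    ax.pie(values, radius=radius, labels=groups)
    ax.set_title(record["CITY_NAME"], fontsize=8)
    return fig

# Example record for one city (all numbers invented).
city = {"CITY_NAME": "Sample City", "POPULATION": 250000,
        "GROUP_A": 120000, "GROUP_B": 70000, "GROUP_C": 40000, "GROUP_D": 20000}
draw_city_graphtip(city, max_population=1000000)
plt.show()
```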
Another GraphTip
could use a group with a minimum of 2 layers: an image layer and a DEM, which is
not visible. A GraphTip could then
pop in for any point in the view with a viewshed overlay computed out to some
radius from the DEM layer. This
GraphTip is a simple example of interactive visual information flow for every
point in your view that can not be done with TNT's
previous pin mapping capabilities. Since
a viewshed function is available in SML,
this is also not a complex script. A
radius is used to limit the extent of the computation for the viewshed so that
the GraphTip can be responsive. A
wide range of types of GraphTips can be implemented across platforms using the
extensive, spatially-oriented tool kit provided by the functions and classes in SML.
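A radius-limited viewshed is also simple to sketch in a general-purpose language. The Python function below is a generic line-of-sight test on a DEM array, not the SML viewshed function mentioned above; it marks the cells within a given radius that are visible from the cursor cell.

```python
import numpy as np

def local_viewshed(dem, row, col, radius, observer_height=1.7):
    """Return a boolean array marking DEM cells visible from (row, col) within 'radius' cells.

    dem is a 2D numpy array of elevations with square cells; a simple line-of-sight
    test is run along the straight line from the observer to each target cell.
    """
    rows, cols = dem.shape
    visible = np.zeros_like(dem, dtype=bool)
    eye = dem[row, col] + observer_height
    for r in range(max(0, row - radius), min(rows, row + radius + 1)):
        for c in range(max(0, col - radius), min(cols, col + radius + 1)):
            dist = np.hypot(r - row, c - col)
            if dist > radius:
                continue
            if dist == 0:
                visible[r, c] = True
                continue
            # Required sight-line slope from the observer to the target cell.
            target_slope = (dem[r, c] - eye) / dist
            # Sample intermediate cells along the line; the target is blocked if any
            # intermediate cell rises above the sight line.
            blocked = False
            for s in range(1, int(dist)):
                t = s / dist
                rr = int(round(row + t * (r - row)))
                cc = int(round(col + t * (c - col)))
                if (dem[rr, cc] - eye) / np.hypot(rr - row, cc - col) > target_slope:
                    blocked = True
                    break
            visible[r, c] = not blocked
    return visible

# Example: visibility within a 50-cell radius of the cursor cell (100, 100).
# dem = ...  # 2D elevation array for the hidden DEM layer
# mask = local_viewshed(dem, 100, 100, radius=50)
```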
Color Plates.
DV7.0 already provides this new GraphTips procedure. Illustrations of this application will be among the new color plates posted on microimages.com.
*
3D Display.
Considerable development effort continues on 3D display, and 2 more advanced terrain rendering models are now available: Dense Ray Casting and Variable Triangulation. Both these models provide better rendering speed and quality than any earlier models. While the available terrain models now total 7, older models such as ray casting are being superseded by these newer, better models and will eventually be purged, probably in DV7.0.
However, at this time these older methods still provide some optional
features not yet implemented in the newer methods such as transparency and
relief shading.
All 6 of the
texture filters for raster drape layers can now be selected for use with any of
these 7 surface rendering models. The
combination of the texture method and terrain model you select will control the
speed and quality of the rendering. Please
refer to the corresponding section in the release MEMO shipped with V6.8
for more details on the other earlier new models and all the new texture
rendering methods (see www.microimages.com/relnotes/v68/rel68.htm).
Dense Ray
Casting.
Dense Ray Casting
uses dense triangulation to render the foreground of the view and ray casting
for the background area. This
hybrid method has good performance and the best terrain rendering, and you do
not see the transition between methods. Choose
the MipMap Anisotropic texture rendering method from the Raster Layer Controls
dialog for the best quality, but it will have an impact on rendering speed.
Variable
Triangulation.
Just as its name
suggests, this terrain triangulation method varies the size of triangles from
the front of the view to the rear as the relationship between terrain detail and
screen pixel size changes. In other
words, small triangles are needed in the foreground to represent topographic
details that are rendered over many screen pixels.
In the background, these fine details may be compressed into a single
screen pixel, and would not be visible, so larger triangles can be used there to
represent a more generalized model of the terrain.
In this fashion, the number of triangles needed to render the terrain is
optimized to produce the best possible representation of the terrain based on
cell size and the current viewpoint. Combining
this surface rendering mode with the MipMap Anisotropic texture filter option
yields the fastest performance of any of the current TNT
3D rendering methods with good quality terrain rendering.
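The trade-off these models manage can be summarized with a simple screen-space relation, offered here only for orientation as a generic approximation, not as MicroImages' actual algorithm:

```latex
% Illustrative screen-space criterion for varying triangle size with distance
% (a generic approximation, not the TNT implementation).
\[
  p \;\approx\; \frac{f\,e}{d}
  \qquad\Longrightarrow\qquad
  e(d) \;\approx\; \frac{p_{\mathrm{target}}\, d}{f}
\]
% A triangle edge of length e at distance d from the viewpoint covers about p screen
% pixels when f is the view's focal length expressed in pixels. Holding the projected
% size near a target of about one pixel lets the edge length grow roughly linearly
% with distance, which is why small triangles serve the foreground and progressively
% larger triangles the background.
```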
DV7.0 - Faster Variable Triangulation.
Repositioning
a view in V6.9 using variable
triangulation is fast because at startup it creates a temporary file with all
needed triangles at the most detailed level.
It then extracts the variable triangular structure from these small
triangles for each new viewpoint. This
temporary file is not saved. V7.0
will improve this aspect of using this terrain model by introducing a procedure
that computes and saves these triangles in a pyramid-like structure.
This will not only improve the overall performance of this method when a
3D view is opened, but will also reduce the computations needed to compute the
variable triangulation model for each new viewpoint.
Much of the effort in 3D in recent TNT
releases has been pointed toward improving 3D performance so that concurrent TNT
2D and 3D views can be used together more interactively, especially when
combined with the new manifold display features.
DV7.0 - Manifolds.
What Are They?
Those clients who have prepaid for RV7.0
can download DV7.0 and gain access
to a new 3D display capability. 3D
displays can now contain manifolds, which are curved 2D surfaces rendered in 3D
space. A dictionary's mathematical definition of manifold is "a topological space that is connected and locally Euclidean."
What Are They
Good For? The practical implication of adding support for manifolds in the TNT products is that you can now display these curved surfaces in their proper
spatial orientation using any viewpoint for your TNT
3D view (for example, seismic profiles as raster objects or geologic cross
sections as vector objects). These
manifold surfaces can not only represent sinusoidal profile shapes that curve in
2 dimensions (for example, profiles in X and Y) but also surfaces that curve in
all 3 dimensions such as spherical, folded, overlapping, or intersecting.
How Are They
Created? To provide this new
capability, the Georeference process has been extended again to provide you with
the ability to add manifold georeferencing to an object.
The georeference points you enter define the XYZ curvature of the object.
The 3D view uses this georeference to drape this layer into the 3D view
along with any of the other layers you could add in RV6.9.
The Spatial Data Editor has also been extended to permit you to modify
the flattened 2D view of the object while viewing the results in 3D.
Color Plates.
DV7.0 already provides this new surface rendering. Thus, by the time you read this, the first color plate(s) will be posted at microimages.com illustrating some manifold surfaces. These examples will be simple, as MicroImages as yet has very limited access to
suitable geodata of this type.
Label
Frames and Leaders.
Using the
appropriate styling for labeling features can be an important design decision.
At one extreme, where you wish to emphasize the labels over other content, you can choose an elliptical shape, a bright background color, larger lettering, and triangular (that is, balloon) leaders. This design draws the readers' attention to the labels over other content. On the other hand, when
content is important you can use outline boxes that are transparent or lightly
color shaded to help you, or your client, locate them only when needed in a
complex image background. We are
all also familiar with the traditional labeling of conventional maps whose
simple text labels are borderless, vary in size, and may follow the feature or
refer to it with a leader line.
Earlier versions of
the TNT products have introduced a
variety of features to improve labeling including a new multilingual text
editor, easily controlled text and font styles, text placement, language
selection, and so on. Now in RV6.9
you can add styled frames and leaders to enhance your final products.
These new label options are introduced below and the use and appearance
of some are illustrated in the attached color plate entitled Label Frames and
Leader Lines.
Frame Box.
Shapes.
Use frames that are
rectangular, rounded rectangular, circular, or elliptical.
Margins.
Independently set
the top, bottom, right, and left margins as a % of the font size to control the
size of the frame and position the label's text in any language anywhere
within its frame. Descending and
ascending characters, multiple diacritical marks above and below, and other
language specific characteristics are automatically handled in setting a default
for all 4 margins.
Styles.
Set the style of
the outline of the frame for the label to determine the thickness and color or
omit the outline.
Fill.
Choose
a fill color and transparency for the inside of the frame to provide a
background for the label text.
Limitations.
All labels for a
specific element type within a layer must have the same frame style.
You may wish to mix frames and other label display characteristics (for
example, have more than 2 colors of frames).
To do this, extract the data for the various groups of features to be
labeled into separate objects and set up separate label styles for each object.
Then add the original object and each of the extracts displaying labels
only for the extracted pieces.
Leader Lines.
Type.
Select
the leader line type from simple or triangular (balloon) lines.
They connect to a point that is always located inside the polygon and not
its centroid, which may not be inside the polygon (for example, a "U"-shaped
polygon). Both types of leaders
connect from that point to a position somewhere along the nearest side or end of
the frame and not to corners.
Styles.
When
line leaders (not triangles) are used, they are rendered in the same fashion as
other TNT lines.
Thus, they can have varied width and color.
Position.
As
displayed the labels and frames can be automatically positioned in or out of
their polygons with the following options:
- Always Inside,
- Fit Inside or None (no label if its frame does not fit inside its polygon),
- Fit Inside or Outside with Leader (outside only if its frame will not fit inside the polygon), and
- Fit Inside or Outside without Leader (outside only if its frame will not fit inside the polygon).
Shape
Object.
The appearance of
this new geospatial object structure in the TNT
products should be considered as a work in progress.
It does appear as a new primary object type right alongside raster, vector, CAD, and the others. However,
unless you are working with Oracle Spatial you will have no direct use of it in RV6.9.
Why Is It Needed?
In addition to CAD
and vector data structures, spatial data is now being stored in Relational
Database Management System (RDBMS) oriented structures, such as Oracle Spatial,
ESRI's shapefile, and MapInfo's TAB approaches. For some time you have been moving ESRI and MapInfo geodata into and out of TNT vector objects, analyzing and editing it in TNTmips processes, including those requiring topology, and exporting it back to these data structures. V6.8 added similar
vector import, edit, analysis, and export capabilities for use with Oracle
Spatial layers.
Vector and CAD
objects in a TNT Project File or
other product's files have formal structures related to their original design
objectives: to store topology and drawings, respectively.
Importing and using database structured spatial information in a CAD or
vector object can quickly change its structure.
For example, maintaining topology during some operations on a vector
object could alter the structure of the attached attributes to the extent that a
complex database structure is created if there is a requirement to export it
back to the external RDBMS. Furthermore, the uncontrolled, freeform structure of CAD data does not match up well with that of a rigorous RDBMS's structure and use.
Directly Use Oracle
Spatial Layers.
Using the TNT import and subsequent export is appropriate if the object is topologically based and the editing and analysis are not too complex in structure, as with a shapefile.
If complex changes are made in an Oracle Spatial layer, then subsequent
cleanup of the tables in Oracle may be required.
However, the best way to directly view these kinds of RDBMS objects as a TNT
layer and to do non-topological editing would be to perform these operations on
them directly in their native RDBMS system.
To this end, for RV6.9 a new object type called a "shape" object has been added to the TNT primary object types (now raster, vector, CAD, shape, TIN, and RDBMS) in a Project File. As yet, this shape object has limited use, and only Oracle Spatial is supported in RV6.9.
The new object has
a structure that is designed to be parallel to and to better accommodate the
non-topological, table-oriented graphical data structures defined in such file
structures as ESRI's shapefile, Oracle's Spatial (OS) layers, and MapInfo's TAB files. Via this new shape object, these spatial database structures can be linked to for direct use in the TNT processes. When the links are formed, a shape object is created in the Project File to contain the
information (subobjects) that TNT
processes need to use that object. This
is called a "stub" or "link" shape object since it does not contain the
original spatial data. Often, however, you can simply think of it as a shape object since it functions just as if the shape's contents were stored directly in the internal shape object.
This is similar to the direct links TNT
makes to raster objects. For
example, when a link is made to a JP2 file, the contents of the original JP2 file are not imported or changed. The
stub or link raster object merely collects and maintains all the descriptive
data (for example, number of rows and columns, numeric data type, histogram, ...), which the
RVC read/write process needs to access to read that JP2 file and present it to a
higher TNT process just as if it
were stored as an internal raster object. The
new shape object functions in this same fashion.
It permits you to autolink directly to the supported external database
structures and stores within the link the properties RVC needs to describe this
new, shape-oriented, external data structure as if it were actually imported as
a TNT shape object.
As work progresses, these data types will also be imported into and used as internal shape objects, in addition to being directly linked and used via this shape object structure.
Progress to Date.
The initial and
primary objective in V6.9 is to
permit you to autolink to Oracle Spatial layers and immediately use and display
them as a layer in a TNTmips
composite view. When this ability
is available for ESRI's shapefiles and MapInfo's TAB files, modifications will be made so that these structures can be directly imported into and exported from this shape object. This will
subsequently lead to the modifications of the higher level TNT
processes to support other uses via the direct link or in the internally stored
shape object.
The RVC Project
File structure has been expanded to define and store shape objects.
Oracle Spatial can be directly linked to, and displayed as a layer or
component in a TNT composite view
(without styles or attributes). Development
effort is now focused upon supporting TNT
actions on these shape object layers such as selecting elements and displaying
attributes. Linking to shapefiles
and TAB files is also a priority.
Raster
Import.
TNT
support for the SDTS, ENVI, and NITF formats has been updated and improved.
A
raster object can be created by importing XYZ text strings.
Now values can be missing and result in null cells.
You can now also set the raster object's cell size by specifying the cell spacing.
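As a rough sketch of what such an import does, the Python function below grids plain "X Y Z" text records at a specified cell spacing and leaves cells with no point as nulls. It is a generic illustration only, not the TNT importer, and its file layout and null handling are assumptions.

```python
import numpy as np

def grid_xyz(path, cell_size, null_value=np.nan):
    """Grid 'X Y Z' text records onto a raster; cells with no point stay null."""
    pts = np.loadtxt(path)                           # columns: X, Y, Z
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    x0, y0 = x.min(), y.min()
    ncols = int(np.floor((x.max() - x0) / cell_size)) + 1
    nrows = int(np.floor((y.max() - y0) / cell_size)) + 1
    raster = np.full((nrows, ncols), null_value)      # start with all cells null
    cols = ((x - x0) / cell_size).astype(int)
    rows = ((y.max() - y) / cell_size).astype(int)    # row 0 at the top (north)
    raster[rows, cols] = z                            # last point falling in a cell wins
    return raster

# raster = grid_xyz("points.txt", cell_size=10.0)
```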
Arc
Shapefile Point Symbols Import/Export.
TrueType Fonts
Required.
The styles for
points in ESRI Arc shapefiles are stored as TrueType glyphs in the AVL (ArcView
Legend) file associated with the SHP file.
In ArcView a multiple color/component point style is defined by
compositing multiple glyphs. You can now import these styles and export them to
an AVL file created as part of the shapefile export process.
The attached color plate entitled Converting Symbols to/from
Shapefiles provides additional details and illustrations in Arc and TNT
views.
Since the AVL file
references and, thus, uses glyphs from TrueType fonts, this import and export of
point styles works only if you have the required TrueType fonts available. If you have ArcView
installed, you will have these fonts available or you can use any other TrueType
font that your TNTmips system and
the user of the Arc shapefile have in common. This is the same old TrueType
license issue encountered in many other import, export, or print-to operations
and in all portable data formats.
Handling Relative
Scale.
If a style element
used in a TNT product has a map
scale assigned, all styles are exported using a map scale.
ArcView can only handle the conditions of "no" map scale or "all," where all the elements must use the same relative map scaling. If all TNT style elements use a design scale, they are exported as non-scaling symbols for ArcView. Similarly, one reference scale is set for all styles imported from an AVL file that has one. Latitude/Longitude should be used to export a style based on map scale.
Oracle
Spatial Layer Import.
When an OS Layer is
imported, you can now specify which TNT
topology you want to have in the vector object being created: planar, network, or rigorous polygonal topology. As
with other imports you can also optionally create standard attribute tables and
element ID tables.
Autonaming is now
available if multiple Oracle Spatial layers are being imported into multiple
vector objects.
Open
DataBase Connectivity (ODBC).
ODBC is supposed to
provide a convenient and standard means to communicate with another software
vendor's RDBMS. This is fine in
theory but not so well implemented in practice.
Each RDBMS developer must provide their own ODBC driver as part of their
product. No one certifies that each
version of each vendor's ODBC driver is 100% correct. As a result, there are multiple versions of each RDBMS's ODBC driver, which are frequently changed, and all can have subtle, but often fatal, consequences for other products such as the TNT
products.
An additional characteristic of using the ODBC driver, instead of the RDBMS's internal proprietary control protocol, is that communication back to the client is weak. In particular, the ODBC connected database system will not notify a linked system when any of its myriad tables has been changed.
TNTmips can keep track of the
changes it has made in linked databases and attributes and redisplay, rebuild
indices, and so on. However, when
some other program or the RDBMS manipulates a table or many tables, most do not
provide notification of these record changes to ODBC linked software.
Since the RDBMS to which the TNT
products link can be complex, it is not practical to constantly check all these
tables to determine when and if they have been changed.
The simplest example of this is that a TNT
tabular view of an ODBC linked table is not updated for a change you or someone
else makes to that table via a program external to your TNT
product. RV6.9
introduces several new options concerning updating the tabular view and other TNT
actions.
Refreshing
Tabular Views of External RDBMS.
Background.
TNT
products sense changes to tables in the TNT
internal RDBMS and automatically refresh the Table View or take other
appropriate actions. TNT's
connections to Oracle Spatial layers use OO4O and are direct.
This direct connection to Oracle can also be used to inform a TNT
process that tables have been changed.
A table being viewed in a TNT product may also reside in some other external RDBMS and be used and viewed via a link that uses its ODBC driver. Unfortunately,
ODBC can not notify the linked program (in this case any TNT
process) that the source table has been altered by some activity in that
external RDBMS.
It would be
possible for the TNT processes to
constantly reexamine the external tables or their indices via ODBC to see if
they have changed, but this would be time consuming and slow. Thus,
for use with ODBC connections, a button has been added to the tabular view to
refresh the table at any time. An
option is also available to automatically refresh the table at an interval you
can specify in the preferences you set for each table.
These features are illustrated in the attached color plate entitled Refresh
Tabular Views of Linked Tables.
Manual Refresh.
Each TNT tabular view now has a "Refresh" button, which will refresh that table to show any changes that have been made to the corresponding ODBC linked table. Nothing else happens in TNTmips when you do a refresh; the table simply redraws to show any changes in its contents.
Automatic
Refresh.
Each specific TNT tabular view has a Preferences setup dialog. An option has been added to it to designate a time interval at which that tabular view, whenever open, will be automatically refreshed from its ODBC linked table. As with a manual refresh, nothing else happens in TNTmips; the table simply redraws to show any changes in its contents.
Visual Basic
Refresh.
To automatically
refresh a TNT tabular view, you can
now put a member in your VB class that SML
checks to see if the external database has been changed by your VB program.
If
you detect in SML that your VB
program indicated that it changed the external table, call the new SML
function TableTriggerRecordChangedCallback(table, record number), where the record
number is optional. This will
notify RVC that the table has changed and it will trigger redraws of table
views, pin map pins, and so on. If
you identify and specify the record number, it will only redraw the pin for that
record. If you can not determine a
record number, omit it and the whole tabular view, pin map, and so on, will
redraw. Similar programs could be
written in C++ or Java to use this ActiveX callback via SML.
Refer to the detailed SML
section below for more information on this topic.
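The only call named above, TableTriggerRecordChangedCallback, is the SML side of this handshake. The Python-style pseudocode below is merely a schematic of the pattern this section describes (check a flag exposed by the external program, then notify so the views redraw); the flag members and the notify stand-in are hypothetical, and a real implementation would be an SML script.

```python
import time

def watch_external_changes(vb_object, table, poll_seconds=5):
    """Poll a flag member exposed by the external (e.g., Visual Basic) program and,
    when it reports a change, notify the display so tabular views and pins redraw."""
    while True:
        # Hypothetical members: the external program sets TableChanged and ChangedRecord
        # (or -1 for "record unknown") whenever it alters the linked table.
        if vb_object.TableChanged:
            record = vb_object.ChangedRecord
            if record >= 0:
                notify_table_changed(table, record)   # redraw only that record's pin
            else:
                notify_table_changed(table)           # redraw the whole tabular view and pin map
            vb_object.TableChanged = False
        time.sleep(poll_seconds)

def notify_table_changed(table, record=None):
    """Stand-in for the SML call named in the text, TableTriggerRecordChangedCallback."""
    print("notify RVC:", table, record)
```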
Map
Projections and Coordinate Systems.
Datums.
The
following new datums are supported:
- European Libyan Datum 1979 (Amal),
- Nahrwan 1967 Qatar, including conversions,
- Hermannskogel datum for Croatia, Serbia, and Bosnia-Herzegovina, and
- the Japan-19 Plane Orthogonal system can now be used with multiple datums.
DV7.0 - Supporting OpenGIS's Spatial Referencing Coordinate Specifications.
DV7.0 will soon support the OpenGIS Consortium (OGC) Spatial Referencing Coordinate Specifications, which can be found in abstract form at www.opengis.org/specs/?page=abstract. This OGC
standard is now incorporated into the ISO standards, which contain its technical
specifications.
Abstract:
ISO 19111:2003 Geographic information - Spatial referencing by coordinates.
"ISO 19111:2003 defines the conceptual schema for the description of spatial referencing by coordinates. It describes the minimum data required to define one-, two- and three-dimensional coordinate reference systems. It allows additional descriptive information to be provided. It also describes the information required to change coordinate values from one coordinate reference system to another.
"ISO 19111:2003 is applicable to producers and users of geographic information. Although it is applicable to digital geographic data, its principles can be extended to many other forms of geographic data such as maps, charts, and text documents."
Virtual (Computed)
Database Fields.
Implied one-to-one
table attachments are now permitted for nodes to allow the use of virtual
(computed) fields with nodes.
*
Orthorectification of QuickBird and IKONOS Images.
Introduction.
A
new capability has been integrated into TNTmips
to permit you to produce accurate orthoimages from a single satellite image that
is provided with a file containing its Rational Polynomial Coefficients (RPCs).
Both IKONOS and QuickBird images can now be ordered with this file containing
the RPCs for that specific scene or subscene.
If an accurate DEM is also available for the area of the satellite scene,
the RPCs can be used in TNTmips to
convert the original images into orthoimages.
This procedure can be used with IKONOS and QuickBird images only, repeat only,
if they are obtained from their respective vendors in the correct format with
these RPCs.
IKONOS
IMPORTANT NOTE: Order only Space Imaging's Geo Ortho Kit Product
for use in this operation!
QUICKBIRD
IMPORTANT NOTE: Order only DigitalGlobe's Ortho Ready
Standard Product for use in this operation!
Using
this new TNT process can produce
satellite orthoimages whose ground surface positional accuracies can approach the image's cell size for areas of bare terrain. A qualitative comparison of the results achieved in TNTmips using this approach with conventional airphoto-produced orthophotos is illustrated in the attached color plates.
The color plate entitled Orthorectification Results for QuickBird compares the results for a portion of an RPC corrected scene of the foothills near Castle Rock, Colorado, acquired on 9 July 2002 from an elevation angle of 77.5 degrees. The small area of the orthoimage illustrated in this comparison has topographic relief of 150 meters (492 feet); the relief of the total scene is 526 meters (1726 feet), with an elevation range from 1756 meters to 2282 meters. The Ground Control Points (GCPs) used were provided by DigitalGlobe.
The color plate entitled Orthorectification Results for IKONOS compares the results for a portion of an RPC corrected coastal scene of the La Jolla Mesa area, a few miles north of San Diego, California, acquired on 8 January 2001 from an elevation angle of 81.2 degrees. The small area of the orthoimage illustrated in this comparison has about 250 meters (820 feet) of relief, and the surface elevation of the total scene ranges from 0 meters MSL along the Pacific Ocean coastline to 250 meters at the top of Soledad Peak near the illustrated subportion. The GCPs used in building the RPC model for this orthoimage were collected by this writer using a hand-held GARMIN sportsman's GPS.
IMPORTANT
NOTE: You do not have to know
anything about photogrammetry to use this procedure.
This new TNT
procedure can be used to rectify these images without requiring any of the
complex inputs and considerations of a traditional photogrammetric solution
using a rigorous sensor model. These kinds of complex photogrammetric analyses
are being constantly changed and improved by those who own and operate the
satellite. With the RPC approach,
the photogrammetry required to use the rigorous sensor model becomes the responsibility of its designer, and who else could do it better?
One of the references cited below (Hu and Tao, 2002) expresses this idea
in its abstract as follows:
"The
rational function model (RFM) is a sensor model that allows users to perform
ortho-rectification and 3D feature extraction from imagery without knowledge of
the physical sensor model. It is a
fact that the RFM is determined by the vendor using a proprietary physical
sensor model. The accuracy of the
RFM solution is dependent on the availability and usage of ground control points
(GCP). In order to obtain a more
accurate RFM solution, the user may be asked to supply GCPs to the data vendor.
However, control information may not be available at the time of data
processing or cannot be supplied due to some reasons (e.g., politics or
confidentiality)."
MicroImages
can readily extend this new TNTmips
image rectification to more image types when other image satellite operators
begin to supply their scenes/subscenes with RPCs.
It is likely that this RPC approach to image rectification will also be
adopted by other image satellite operators to permit you to orthorectify their
products at your desktop. Why?
Because the operators of these satellite imaging systems can not readily obtain
the accurate DEMs and XYZ GCPs that you can acquire and use for your local
areas. For a variety of reasons,
acquiring accurate GCPs and an accurate DEM may be restricted in many locations
and nations. However, you have
direct access to your project areas to collect the GCPs or you can work in your
native language with local collaborators who do.
Thus, this new RPC approach to modeling image distortion permits these
sensitive, restricted, and/or classified GCP and DEM data to be used locally
within the restrictions imposed upon them.
Furthermore, the RPC approach, as contrasted to the rigorous model
approach, can be applied to the partial image scenes now sold by both these
satellite image operators.
Background.
Digital
orthorectification of any aerial or satellite image requires a mathematical
model of the imaging system, so that the position of each cell in the raw image
can be related mathematically to its corresponding three-dimensional position on
the ground. Using an accurate
imaging model and an accurate digital elevation model (DEM), a satellite image
can be processed to create an orthorectified image in which each image cell has
been restored to its correct geographic position.
Remote
sensing satellites, such as IKONOS and QuickBird, build up each image
sequentially, scanline by scanline, as the satellite moves forward in orbit.
Different parts of a single image are thus captured at different times
from different satellite positions. As
a result, a rigorous photogrammetric description of the imaging geometry that
models all of the physical elements of the system can be exceedingly long and
complex. For example, the IKONOS
rigorous sensor model is 183 pages long. The
actual orthorectification procedure using such a rigorous model, which is
typically proprietary, requires many image-specific parameters and an
extraordinary amount of computation.
Space
Imaging (IKONOS) and DigitalGlobe (QuickBird) have, therefore, both adopted the
Rational Polynomial approach to distribute the image geometry model with their
ortho-ready images (images suitable for orthorectification by the customer).
A rational polynomial model is a simpler empirical model relating image
space (line and column position) to the coordinates of corresponding points on
the ground (latitude, longitude, and surface elevation).
All of the various physical effects of the imaging system are boiled down
into two mathematical functions: one relating these sets of coordinates in the
image line direction and the other for the image column direction.
Each of these functions is a ratio of two cubic polynomial expressions,
which leads to the name Rational Polynomial method.
The image providers compute the unique rational functions for each image
using the satellite?s orbital parameters and their rigorous sensor model and
distribute the coefficients of these rational functions (RPCs) as a text file
accompanying the ortho-ready image. Computation
of the orthorectified image using a DEM and these RPC coefficients is fast and
accurate.
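In its usual normalized form (stated generically here, not as either vendor's exact file layout), the model reads:

```latex
% Rational function (RPC) model in its usual normalized form. P, L, H are normalized
% latitude, longitude, and height; "line" and "sample" are normalized image coordinates.
\[
  \mathrm{line} = \frac{\mathrm{Num}_L(P, L, H)}{\mathrm{Den}_L(P, L, H)},
  \qquad
  \mathrm{sample} = \frac{\mathrm{Num}_S(P, L, H)}{\mathrm{Den}_S(P, L, H)}
\]
% Each numerator and denominator is a cubic polynomial in P, L, and H (20 terms, total
% degree at most 3), so the RPC file carries 80 polynomial coefficients plus the offsets
% and scales used to normalize the ground and image coordinates.
```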
Both
these satellite operators sell their lowest-cost ortho-ready images for use in
this process. The georeference
information provided with their images is computed by these companies using only
the sensor model and satellite orbital parameters to project the image onto a
reference WGS84 ellipsoid at a reference elevation.
The map coordinates for the resulting computed image corners are then provided with the image. Small
errors in the satellite position information, or the choice of an inappropriate
reference elevation, can have a large effect on the geometric accuracy of the
rectified image. Research has shown
that the RPC model supplied with an image can be adjusted to correct such errors
by using a small number of accurate 3D ground control points.
The adjustment improves the overall positioning of the raw image with
respect to the DEM, so that each raw image cell is more likely to be processed
with an appropriate elevation value and, thus, projected to a more accurate
position in the rectified image.
General Approach.
Introduction.
The first release
of the RPC orthorectification capability in TNTmips
had several prototypes prior to this official RV6.9
release. Initially the procedure
was designed and implemented as a new and separate photogrammetric-like process
since that is how competing products have presented it.
However, through a series of iterations, ultimately it was realized that
the procedures could be provided for you as simple modifications to the existing
TNT Georeference and Raster
Resampling processes. As a result,
this official release is elegant and simple to follow, and to apply it, you need
only adjust to a few changes in these 2 familiar and frequently used TNT
processes. An overview of this
approach is illustrated in the attached color plate entitled Rational
Polynomial Orthorectification of IKONOS/Quickbird Images.
Georeferencing.
You begin this
procedure in the Georeference process. This
is the same TNTmips process that you
have been using to associate 2D or 3D GCPs with image pixels and to evaluate how well they fit. In RV6.9 the Georeference process has been expanded to allow you to evaluate the fit of your control points to the RPC model defined by the parameters supplied with these images. You can then add,
delete, make active, or make inactive any control point to test its impact on
the current fit of the RPC model to the ground. You can also test the adjusted
model against sets of points that you enter as independent test points.
This is where the real setup activity is conducted: in this fashion you carefully add a few accurate GCPs to define the RPC model for the specific input scene. All these new georeferencing
capabilities are discussed in detail in the section below entitled Georeferencing.
All the latest
sensor and photogrammetric changes and complications are reflected in the
specific RPC file these satellite operators provide with each specific image. Thus, the photogrammetric
aspect of this new RPC approach can be improved with time, but is beyond your
control. The accuracy of the
orthoimages you produce using their RPC image kit in this new TNT
approach depends primarily upon the accuracy and cell size of your DEM and how
carefully and accurately you collect your GCPs and identify their positions in
the original image. A few, highly
accurate GCPs and associated positions will suffice. Once
you acquire these GCPs, you can reuse them along with your DEM for each new RPC
image that you acquire for this area from any imaging satellite.
If you can not
acquire survey quality GCPs and/or their exact pixels are hard to find in the
image, then you will need a larger, well distributed collection of GCPs.
You can then use the new statistical RPC model evaluation procedures
added to the Georeference process to help you evaluate your points, decide if
they are adequate and which to use, and predict the accuracy of the RPC model
they produce. This strategy is
covered in more detail in the new tutorial entitled Orthorectification Using
Rational Polynomials, which has been printed and shipped with your release
of RV6.9.
Input.
The GCPs you collect might number from 6 to 25 - accuracy, key
topographic features, and a good distribution of the points over the scene are
more important than a large number of points!
When you are using the RPC model, the residual errors for your GCPs show
the departure of each point from the actual rectification model that will be
used later to orthorectify the image. You
can also view a list of the same points and residuals projected to that
hypothetical orthorectified image. The
attached color plate entitled Evaluating Control Points for Rational
Polynomial Orthorectification introduces some of these new features and
feedback information incorporated into the revised Georeference process to help
you establish an accurate RPC model. They permit you to experiment, while you are adding GCPs, with the potential accuracy of the RPC solution achieved using any selected subset of your GCPs for model control (that is, the model is solved using only these active points).
Testing.
The Georeference process permits you to determine which of the points you
add are used in determining the fit of the RPC model to the image and which are
used only in testing the result. At
any point the Georeference process can immediately test your current RPC model by applying it to those GCPs you have designated for use only as test points, and it then reports how well the current model predicts the positions of these test points. These concepts are
illustrated in the attached color plate entitled Testing Rational Polynomial
Orthorectification. You can
even determine how well your current RPC model would fit the selected model or
test points if the Z values for the corresponding DEM cell were used instead of
the measured Z values you entered for the GCPs.
Why? Because ultimately the RPC model will only have the Z values from
the DEM cells to correct the positions of all the image pixels.
To provide this comparison, and to use the control points to adjust the
RPC model, Georeference requires that you identify the DEM to be used in the
orthorectification. All these
activities happen in the familiar but now considerably modified Georeference
process.
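The test-point report reduces to simple residual statistics. A generic Python sketch of that kind of summary (not the Georeference process itself) follows:

```python
import math

def test_point_report(predicted, measured):
    """Summarize residuals between RPC-predicted and measured image positions.

    predicted, measured -- lists of (line, sample) pairs for the withheld test points.
    Returns per-point residual distances and their RMS, in pixels.
    """
    residuals = [math.hypot(pl - ml, ps - ms)
                 for (pl, ps), (ml, ms) in zip(predicted, measured)]
    rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return residuals, rms

# Example with made-up positions for three test points.
pred = [(1204.3, 886.1), (355.0, 2210.7), (2999.8, 140.2)]
meas = [(1205.0, 885.0), (354.2, 2211.5), (3001.1, 141.0)]
per_point, rms_error = test_point_report(pred, meas)
```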
Raster
Resampling.
Simplicity.
After your GCP input, testing, and model evaluation are complete, the
final collection of GCPs and the RPCs are saved by the Georeference process as
subobjects of the input image?s raster object.
The actual full restitution of the scene or subscene is then performed as
a new Rational Polynomial resampling option added to the TNT
Automatic Raster Resampling process. Simply select the raster object of the source image set up in the Georeference process
(and, thus, the automatically identified RPCs and GCPs), select the correct DEM,
and select the Rational Polynomial option from the list of other models (instead
of Affine, Plane Projective, Bilinear, or one of the other polynomials).
The time to complete the restitution is fast and similar to that for the
other familiar TNT resampling
procedures. The final X-Y
positional accuracy of each pixel in your orthoimage relative to the ground
surface depends heavily upon the pixel ground size, the X, Y, and Z accuracy of
your DEM, the X, Y, and Z accuracy of your GCPs, and your accurate placement of
them in the input image.
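Conceptually, the Rational Polynomial resampling option fills each output cell by looking up the DEM elevation at that ground position, evaluating the RPC model to find the corresponding position in the raw image, and sampling the raw image there. The Python sketch below states that backward-mapping loop generically (nearest-neighbor sampling only); the helper callables it takes are placeholders, not TNT functions.

```python
def orthorectify(raw_image, dem_lookup, rpc_to_image, out_rows, out_cols,
                 cell_to_ground, null_value=0):
    """Backward-mapping orthorectification sketch (nearest-neighbor resampling).

    cell_to_ground(row, col)  -> (lat, lon) for an output cell center
    dem_lookup(lat, lon)      -> surface elevation, or None outside the DEM
    rpc_to_image(lat, lon, h) -> (line, sample) in the raw image via the RPC model
    """
    ortho = [[null_value] * out_cols for _ in range(out_rows)]
    raw_rows, raw_cols = len(raw_image), len(raw_image[0])
    for row in range(out_rows):
        for col in range(out_cols):
            lat, lon = cell_to_ground(row, col)
            h = dem_lookup(lat, lon)
            if h is None:                      # outside the DEM: leave the null value
                continue
            line, sample = rpc_to_image(lat, lon, h)
            i, j = int(round(line)), int(round(sample))
            if 0 <= i < raw_rows and 0 <= j < raw_cols:
                ortho[row][col] = raw_image[i][j]
    return ortho
```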
Flexibility.
You are using the familiar TNT
Automatic Raster Resampling process, not some specialized photogrammetric
approach and process. Thus, you are
able to use any of the many options this flexible process provides.
The input image and DEM do not need to be trimmed or converted to a
common projection or cell size. However, the orthoimage can only be produced for
the common input area and any mismatched area requested will be filled with the
selected null value. For the output
image you can choose the output cell size, projection, orientation, clipping
extents, JPEG2000 lossless compression, and so on.
Summary of the
Input Requirements.
It is important
that you understand the following about this new TNT
process. These same considerations
apply to other software packages that use the Rational Polynomial method to
produce orthorectified satellite images.
You must purchase
the correct QuickBird or IKONOS ortho-ready product type; RPC orthorectification can not be used on any of the other product types supplied by these image vendors.
You must have
available, or create, an accurate DEM of the area to be orthorectified. TNTmips
provides several methods of producing a DEM, ranging from digitized contour maps to surface modeling of other data sources. Obviously
the quality of the resulting orthoimage produced by TNTmips
will depend upon the cell size and accuracy of this DEM.
Even with a highly
accurate DEM you will achieve the highest accuracy from this process only if you
can collect accurate ground control points well distributed over the image.
The control points can be reused repeatedly for the same area.
You can acquire GCPs with a hand-held sports GPS unit, survey quality GPS
unit, or from survey monuments or other positions determined from detailed
contour maps.
You can optionally
use a collection of additional independent test points to evaluate the accuracy
of your GCPs, which will in turn determine how accurate the orthoimage's
pixels are relative to their actual ground position.
However, this will not evaluate how the cell size and accuracy or
inaccuracy of your DEM will affect the positional accuracy of pixels in your
final orthoimage.
Tutorial Booklet.
A tutorial booklet
providing more background and a detailed step-by-step approach to producing an
orthoimage from an RPC satellite image kit is available but was not finished in
time to be included on the RV6.9 CD.
To help introduce you to this new capability, this booklet entitled Orthorectification
Using Rational Polynomials has been printed and accompanies this MEMO.
One of the example exercises in this booklet uses a small IKONOS image as
sample data. This image sample and
the associated DEM needed for this tutorial are part of the sample data provided
with TNTlite.
This sample data and the online PDF version of this new tutorial can be
obtained from microimages.com.
Pan Versus
Multispectral Issues.
QuickBird and
IKONOS use separate detector arrays for collecting their panchromatic and
multispectral images. In addition
to having different cell sizes, these detector arrays are offset from each other
and image the ground cells from somewhat different angles.
Thus, the parallax introduced by the terrain relief will be different for
the pan and multispectral representation of each ground cell.
In areas of large relief this difference can be significant, amounting to
a meter or more, which is a whole cell or more in the pan image.
These two types of images will come with their own separate RPCs.
Each must be rectified separately into orthoimages.
However, if an accurate DEM and GCPs are available, they can be processed
in TNTmips into a common cell size
and projection and accurately overlaid.
If pan and
multispectral images are to be combined into a color image using pan-sharpening,
this step should be performed after each component has been orthorectified.
If you pan sharpen before the rectification is applied, one of these
image sets will end up being rectified using the RPCs of the other and this will
introduce positional errors.
Your collection of
GCPs can be used in both the pan and multispectral images.
You will need to separately enter them into the pan and multispectral
images in the Georeference process. If
you are acquiring GPS points for control, plot your GCPs in the field on
enlarged sections of prints of the pan image or any higher resolution images
(neither of which need to be orthorectified).
Then use these prints in the office to assist you in locating these
points on the pan image in the Georeference process.
You can then use their location in the pan image to assist you in finding
their corresponding position when georeferencing the lower resolution
multispectral image.
References.
Since
this is a new and technical imaging concept of wide interest, the following are
some additional references describing the use of the rational polynomial
approach to rectifying satellite images.
QuickBird - Geometric Correction, Processing and Data. Phillip Cheng, Thierry Toutin, Yun Zhang, and Mathew Wood. EOM, May 2003, pp. 24-30.
Block
Adjustment of High-Resolution Satellite Images Described by Rational
Polynomials. Jacek Grodecki
and Gene Dial. Photogrammetric
Engineering and Remote Sensing. Vol.
69, No. 1, January 2003, pp. 59-68.
Abstract:
"This paper describes how to
block adjust high-resolution satellite imagery described by Rational Polynomial
Coefficient (RPC) camera models and illustrates the method with an Ikonos
example. By incorporating a priori
constraints into the adjustment model, multiple independent images can be
adjusted with or without ground control. The
RPC block adjustment model presented in this paper is directly related to the
geometric properties of the physical camera model.
Multiple physical camera model parameters having the same net effect on
the object-image relationship are replaced by a single adjustment parameter.
Consequently, the proposed method is numerically more stable than the
traditional adjustment of exterior and interior orientation parameters.
This method is generally applicable to any photogrammetric camera with a
narrow field of view, calibrated, stable interior orientation, and accurate a
priori exterior orientation data. As
demonstrated in the paper, for Ikonos satellite imagery, the RPC block
adjustment achieves the same accuracy as the ground station block adjustment
with the full physical camera model."
Bias
Compensation in Rational Functions for Ikonos Satellite Imagery.
Clive S. Fraser and Harry B. Hanley. Photogrammetric Engineering and
Remote Sensing. Vol. 69, No. 1,
January 2003, pp. 53-57.
Abstract:
"A method for the removal of
exterior orientation biases in rational function coefficients (RPC) for Ikonos
imagery is developed. These biases,
which are inherent in RPC's derived without the aid of ground control, give
rise to geopositioning errors. The
3D positioning error can subsequently be compensated during spatial intersection
by two additional parameters in image coordinates.
The resulting bias parameters can then be used to correct the RPC's
supplied with Ikonos Geo imagery such that a practical means is provided for
bias-free ground point determination, nominally to meter-level absolute
accuracy, using entirely standard procedures on any photogrammetric workstation
that supports Ikonos RPCs. The
method requires provision of one or more ground control points.
Aside from developing the bias compensation method, the paper also
summarizes practical testing with bias-corrected RPCs that has demonstrated
sub-meter geopositioning accuracy from Ikonos Geo imagery."
Error
Tracking in Ikonos Geometric Processing Using a 3D Parametric Model.
Thierry Toutin. Photogrammetric Engineering and Remote Sensing.
Vol. 69, No. 1, January 2003, pp. 43-51.
Abstract:
"Thirteen panchromatic (Pan) and multiband (XS) Ikonos Geo product images over
seven study sites with various environments and terrain were tested using
different cartographic data and accuracies with a 3D parametric model developed
at the Canada Centre for Remote Sensing, Natural Resources Canada.
The objectives of this study were to define the relationship between the
final accuracy and the number and accuracy of input data, to track error
propagation during the full geometric correction process (bundle adjustment and
ortho-rectification), and to advise on the applicability of the model in
operational environments.
"When ground control points (GCPs) have an accuracy poorer than 3 m, 20 GCPs over the
entire image are a good compromise to obtain a 3- to 4-m accuracy in the bundle
adjustment. When GCP accuracy is
better than 1 m, ten GCPs are enough to decrease bundle adjustment error of
either panchromatic or multiband images to 2
to 3
m. Because GCP residuals reflect
the input data errors (map and/or plotting), these errors did not propagate
through the 3D parametric model, and the internal accuracy of the geometric
models is thus better (around a pixel or less).
"Quantitative
and qualitative evaluations of ortho-images were thus performed with either
independent check points or overlaid digital vector files.
Generally, the measured errors and a 2- to 4-m positioning accuracy was
achieved for the ortho-images depending upon the elevation accuracy (DEM and
grid spacing). To achieve a better
final positioning accuracy, such as 1 m, a DEM with an accuracy of 1 to 2 m and
with a fine grid spacing is required, in addition to well-defined GCPs with an
accuracy of 1 m."
Rational
Functions and Potential for Rigorous Sensor Model Recovery.
Kaichang Di, Ruijin Ma, and Rongxing Li. Photogrammetric Engineering and
Remote Sensing. Vol. 69, No. 1,
January 2003, pp. 33-41.
Abstract:
"Rational functions (RFs) have
been applied to photogrammetry and remote sensing to represent the
transformation between the image space and object space whenever the rigorous
model is made unavailable intentionally or unintentionally.
It attracts more attention now because Ikonos
high-resolution images are being released to users with only RF
coefficients. This paper briefly
introduces the RF for photogrammetric processing.
Equations of space intersection with upward RF are derived.
The computational experimental result with one-meter resolution Ikonos
Geo stereo images and other airborne data verified the accuracy of the upward RF-based
space intersection. We demonstrated
different ways to improve the geopositioning accuracy of Ikonos Geo stereo
imagery with ground control points by either refining the vendor-provided Ikonos
RF coefficients or refining the RF-derived ground coordinates.
The accuracy of 3D ground point determination was improved to 1 to 2
meters after refinement. Finally we
showed the potential for recovering sensor models of a frame image and linear
array image from the RF."
3D
Reconstruction Methods Based on the Rational Function Model.
C. Vincent Tao and Yong Hu. Photogrammetric
Engineering and Remote Sensing. Vol.
68, No. 7, July 2002, pp. 705-714.
Abstract:
"The rational function model (RFM)
is an alternative sensor model allowing users to perform photogrammetric
processing. The RFM has been used
as a replacement sensor model in
some commercial photogrammetric systems due to its capability of maintaining the
accuracy of the physical sensor models and its generic characteristic of
supporting sensor-independent photogrammetric processing.
With RFM parameters provided, end users are able to perform
photogrammetric processing including orthorectification, 3D reconstruction, and
DEM generation with an absence of the physical sensor model.
In this research, we investigate two methods for RFM-based 3D
reconstruction, the inverse RFM method and the forward RFM method.
Detailed derivations of the algorithmic procedure are described.
The emphasis is placed on the comparison of these two reconstruction
methods. Experimental results show
that the forward RFM can achieve a better reconstruction accuracy.
Finally, real Ikonos stereo pairs were employed to verify the
applicability and the performance of the reconstruction method."
Updating
Solutions of the Rational Function Model Using Additional Control Information.
Yong Hu and C. Vincent Tao. Photogrammetric
Engineering and Remote Sensing. Vol.
68, No. 7, July 2002, pp. 715-723.
Abstract:
"The rational function model (RFM)
is a sensor model that allows users to perform ortho-rectification and 3D
feature extraction from imagery without knowledge of the physical sensor model.
It is a fact that the RFM is determined by the vendor using a proprietary
physical sensor model. The accuracy
of the RFM solution is dependent on the availability and usage of ground control
points (GCP). In order to obtain a
more accurate RFM solution, the user may be asked to supply GCPs to the data
vendor. However, control
information may not be available at the time of data processing or cannot be
supplied due to some reasons (e.g., politics or confidentiality).
This paper addresses a means to update or improve the existing RFM
solutions when additional GCPs are available, without knowing the physical
sensor model. From a linear
estimation perspective, the above issue can be tackled using a phased estimation
theory. In this paper, two methods
are proposed: a batch iterative least-squares (BILS) method and an incremental
discrete Kalman filtering (IDKF) method. Detailed
descriptions of both methods are given. The
feasibility of these two methods is validated and their performances are
evaluated. Some results concerning
the updating of Ikonos imagery are also discussed."
A Study on the Generation of the KOMPSAT-1 RPC Model. Hye-jin Kim, Dae-sung Kim,
Hyo-sung Lee, and Young-il Kim. no date. 4 pages.
see http://www.isprs.org/commission3/proceedings/papers/paper129.pdf
Abstract:
"The rational polynomial coefficients (RPC) model is a generalized sensor model
that is used as an alternative solution for the physical sensor model for IKONOS
of Space Imaging. As the number of sensors increases along with greater
complexity and a standard sensor model is needed, the applicability of the RPC
model is increasing.
The RPC model has the advantages in being able to substitute for all
sensor models, such as the projective, the linear pushbroom and the SAR.
"This
report aimed to generate a RPC model from the physical sensor model of the
KOMPSAT-1 (Korean Multi-Purpose Satellite) and aerial photography.
The KOMPSAT-1 collects 510~730 nm panchromatic imagery with a ground
sample distance (GSD) of 6.6 m and a swath width of 17 km by pushbroom scanning.
The iterative least square solution was used to estimate the RPC.
In addition, data normalization and regularization were applied to
improve the accuracy and minimize noise. This
study found that the RPC model is suitable for both KOMPSAT-1 and aerial
photography."
Rational
Mapper: A Software Tool for Photogrammetric Exploitation of High Resolution
Satellite Imagery. C. Vincent
Tao, Yong Hu, Wanshou Jiang. no
date. 6 pages.
see http://www.geoict.yorku.ca/project/rationalmapper/rationalmapper.htm
Background:
"The Rational Function Model (RFM)
has gained considerable interest recently in the photogrammetry and remote
sensing community. This is mainly
due to the fact that some satellite data vendors, for example, Space Imaging,
Thornton, CO, USA,
have adopted the RFM as a replacement sensor model for image exploitation.
The data vendor will supply users with the parameters of the RFM instead
of the rigorous sensor models. Such
a strategy may help keep the confidential information about the sensors and on
the other hand, facilitate the use of high-resolution satellite imagery for
general public uses (non-photogrammetrists) since the RFM is easy to use and
easy to understand."
It continues with an easily understood description of the general idea.
An Update
on the Use of Rational Functions for Photogrammetric Restitution.
Ian Dowman and Vincent Tao. ISPRS.
September 2002, Vol. 1, No. 3. pp.
22-29.
Excerpt:
"During the past four or five
years the photogrammetric community at large has become aware of the use of
rational functions for photogrammetric restitution.
This has been due largely to the need to use these for setting up Ikonos
data, as Space Imaging does not provide a physical sensor model with the image
data. Rational functions have been
used for some time for military use and their widespread acceptance in that area
has led to the proposal to the OGC for a standard for image restitution based on
rational functions on the basis of its universality: it can be used with any
sensor."
For the complete
article see: http://www.isprs.org/publications/highlights/highlights0703/22_HL_09_02_Article_Dowman.pdf
Geometric
Information from IKONOS. Zhigu
Hu. GIM International.
Vol. 17, No. 9, September 2003. pp.
42-45.
"Since
pixel size of panchromatic IKONOS imagery is one meter it is desirable
that the planimetric and height error after georeferencing be smaller.
Which method provides the high accuracy requirement?
The author tested three processing methods: (1) IKONOS RPC parameters
only, (2) an associated adjustment of RPC parameters with control points and (3)
a strict geometric model based on affine transformation.
The adjustment of RPC coefficients with one or two control points
improves accuracy significantly, especially in elevation.
Using only five control points, the strict
model yields very high accuracy of 5 to 12 cm in X, Y, and Z with one
pair of stereo images covering. The
method does not need RPC coefficients, whilst the coordinate system can be any
independent one. The strict model
represents a new step forward, both in theory and application, for
high-resolution IKONOS imagery processing."
[In the first
sentence of this paper the author states: "The
stereo pair of panchromatic IKONOS images with 1-meter spatial resolution used
in the test covers over 20 square kilometers of a quite
flat Beijing
suburb." It is possible that
a DEM was not even used and the solution was performed using an average
elevation. How well will these
results extend to areas with typical topographic relief?]
*Georeferencing.
The
user interface for the Georeference process has had a significant upgrade for RV6.9,
and it also provides a new Rational Polynomial model for use in georeferencing
images supplied with an orthorectification model in the form of rational
polynomial coefficients (RPCs). These
changes are part of the new TNTmips
procedures that allow you to produce accurate orthoimages from satellite images
distributed with RPCs. A broader
discussion of these changes can be found in the major section above entitled Orthorectification
of QuickBird and Ikonos Images. They
are also outlined in more detail in the new tutorial booklet entitled Orthorectification
Using Rational Polynomials that is distributed in printed form with this
MEMO.
Control Point
List.
The
list of control point position coordinates and residuals entered in this process
has been updated to use a grid interface like that used in tabular views of
database tables. In the previous
text listing, headings could become misaligned with their column entries
depending on the interface font you were using and on which units you had chosen
for coordinates and residuals. The
new grid listing automatically keeps headings and columns aligned and also gives
you much more control over the layout and appearance of the list.
You can easily change the width of columns by dragging the column
boundary in the heading row to the left or right or rearrange the order of
columns by dragging a column heading to another position. These
improvements in the layout of this list can be seen in the attached color plate
entitled Evaluating Control Points for Rational Polynomial Orthorectification.
As
in V6.8, the status of control
points can be toggled between Active and Inactive states.
Inactive points are temporarily removed from the mathematical computation
used to find the best fit of the active points to the selected geometric
transformation model and to compute the error residuals for each point.
In the V6.8 text listing, the
status of inactive control points was indicated only by a minus sign
("-") next to the control point number.
The new grid listing uses the background color for each row in the list
to indicate the point status. The
colors for active, inactive, and the currently selected point listings are subdued
versions of the colors used to draw the control point symbols in the
Georeference views. These colors
can be modified by selecting Colors... from the Options menu on the Georeference
window. These improvements in
identification of the status of each control point can also be seen in the
attached color plate entitled Evaluating Control Points for Rational
Polynomial Orthorectification.
Point Set
Statistics.
The
information and statistics listing at the bottom of the Georeference window has
been expanded to give you more information about the entire set of control
points. This is illustrated in the
control point lists in the same color plate noted above.
In addition to the previous cell size and projection data, the listing
now also includes computed RMS (Root Mean Square) Error values for the set of
points that are currently active and a separate listing for the inactive points.
Separate listings of Mean Deviation values are similarly shown for both
sets of points. Both of these
statistics provide measures of the average positional error for the active and
inactive point sets. You can use
these measures along with the error residuals for individual points to assess
the accuracy of your georeference control.
As always, these summary statistics are immediately updated when you add,
delete, or change the status of any control point.
You can therefore freely experiment with different combinations of
points, designating suspect points as inactive to see the resulting effect on
the error statistics and on the individual control point residuals.
Inactive
point status also has an additional use. If you have a sufficient number of
accurate control points, you can reserve some of them as test points.
Enter the main set of control points as active points as usual to set up
the georeference control, then enter the test points as inactive points.
Since you are presented with separate error statistics and residuals for
each set, you can readily assess the fit of the independent test points to the
georeference control provided by the active control points.
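Both statistics are standard measures of average positional error. As background only (this is a generic Python sketch with hypothetical names, not TNT code, and the exact formulas TNTmips uses are not spelled out in this MEMO), one common way to compute them from per-point residuals, keeping active and inactive (test) points separate, is:

    import math

    def summarize(residuals):
        """Return (RMS error, mean deviation) for a list of (dx, dy) residuals.
        Per-point error is the 2D distance sqrt(dx^2 + dy^2); RMS is the root of
        the mean squared distance, and the mean deviation is the average distance."""
        if not residuals:
            return 0.0, 0.0
        dists = [math.hypot(dx, dy) for dx, dy in residuals]
        rms = math.sqrt(sum(d * d for d in dists) / len(dists))
        return rms, sum(dists) / len(dists)

    # Hypothetical control points: (dx, dy) residuals in meters plus a status flag.
    points = [
        {"resid": (0.8, -0.5), "active": True},
        {"resid": (-1.2, 0.4), "active": True},
        {"resid": (2.5, 1.9), "active": False},   # reserved as a test point
    ]
    active = [p["resid"] for p in points if p["active"]]
    inactive = [p["resid"] for p in points if not p["active"]]
    print("active:   RMS %.2f m, mean dev %.2f m" % summarize(active))
    print("inactive: RMS %.2f m, mean dev %.2f m" % summarize(inactive))

Reserving a few accurate points as an inactive test set, as suggested above, then amounts to comparing the two summaries.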
Rational Polynomial
Model.
In
TNTmips you use the Georeference
process to add these ground control points to the image prior to rectification.
This new feature in the Georeference process now allows you to use the
RPC model supplied with an ortho-ready image to compute the control point error
residuals and statistics, and it also uses your control points to adjust the
rectification model to provide a more accurate rectified image.
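For background, a rational polynomial camera model expresses an image coordinate (line or sample) as the ratio of two cubic polynomials evaluated on normalized ground coordinates. The sketch below illustrates that general form in Python; it is not the TNT implementation, the helper and dictionary names are hypothetical, and the ordering of the 20 polynomial terms in real IKONOS or QuickBird RPC files follows the vendor's definition rather than the loop order used here.

    def cubic_terms(L, P, H):
        """All monomials of total degree <= 3 in (L, P, H); 20 terms.
        Real RPC files fix a specific term ordering that may differ."""
        terms = []
        for i in range(4):
            for j in range(4 - i):
                for k in range(4 - i - j):
                    terms.append((L ** i) * (P ** j) * (H ** k))
        return terms

    def rpc_image_coord(num, den, lat, lon, h, off, scale):
        """Evaluate one image coordinate (line or sample) from 20 numerator and
        20 denominator coefficients. Ground coordinates are first normalized with
        the offsets and scale factors supplied in the RPC file."""
        P = (lat - off["lat"]) / scale["lat"]
        L = (lon - off["lon"]) / scale["lon"]
        H = (h - off["h"]) / scale["h"]
        t = cubic_terms(L, P, H)
        value = sum(a * x for a, x in zip(num, t)) / sum(b * x for b, x in zip(den, t))
        return value * scale["img"] + off["img"]   # de-normalize to image space

The block-adjustment methods cited in the references above refine such a model by estimating a small number of bias or affine correction parameters in image space from the ground control points, which is why even a few accurate 3D points can improve the result.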
Implementation.
To
regeoreference a satellite image supplied with RPCs, choose the new Rational
Polynomial geometric transformation model from the Model menu in the
Georeference window. You are then
prompted to select the text file containing the image-specific rational
polynomial coefficients. The
process can read the standard RPC text formats for files supplied with IKONOS
and QuickBird images. These files
have RPCs as part of the file name (IKONOS) or RP as part of the file extension
(QuickBird). When you save the
regeoreferenced image, the coefficients are stored as a subobject of the
georeference object. When you
subsequently open the image in the Automatic Resampling process to perform the
orthorectification (or reopen it in the Georeference process), this RPC
subobject is automatically detected and the model menu set to Rational
Polynomial, so you are not prompted again for the RPC text file.
After
you have selected the RPC text file, you are also prompted to select a DEM,
which is used to compute residuals and to adjust the rational polynomial model.
The digital elevation model you use need not match the image area or cell
size. If there is only partial
overlap between the DEM and the image, only the common area is processed during
the rectification phase, and only the control points in the common area are used
to position the image and adjust the RPC model.
Since the accuracy of the final rectification in the Automatic Resampling
process depends in part on the elevation values assigned to individual image
cells from the DEM, the cell size of the DEM you use in georeferencing and
rectification should be as close as possible to that of the image.
For example, if you are working with a 4-meter multispectral IKONOS
image, a DEM with 10-meter cell size should produce more accurate rectification
results than a 30-meter DEM.
The
solution or adjustment of the rational polynomial functions in the Georeference
process requires that all elevations be referenced to the WGS 1984 ellipsoid.
However, elevations in the DEM you use for georeferencing or
rectification are instead most commonly referenced to the undulating geoid
surface that represents a global approximation of mean sea level.
The geoidal DEM elevations can be converted to ellipsoidal elevations by
subtracting the geoid height, the local height of the geoid (positive or
negative) relative to the ellipsoid. The
Georeference process performs this conversion for you using the geoid height you
enter in the final prompt dialog during setup.
Because geoid heights vary only gradually on a regional to continental
scale, a single average geoid height value can be used for a typical IKONOS or
QuickBird scene. You can find the
appropriate geoid height for your image area by entering the latitude and
longitude of the image center in one of several free geoid height calculators
available on the World Wide Web. Links
to these sites can be found in the Orthorectification Using Rational
Polynomials tutorial booklet.
3D Control
Points Required.
All
of the usual georeference tools are available when you work with the Rational
Polynomial model, including manual entry of map coordinates for points collected
in a GPS survey and use of a Reference View for transferring control point
positions from another accurate planimetric map or orthoimage.
In order to adjust and refine the RPC rectification model, however, the
control points you provide must have 3D coordinates: an elevation value
(Z-coordinate) in addition to the usual map (X and Y) coordinates.
Control point elevation values are not shown by default in the control
point list, but a selection on the Georeference window's Options menu turns on
the Elevation column.
You
can acquire and manually enter the elevation values for your control points from
GPS survey data or from a topographic contour map.
If these elevations are referenced to the geoid (as in the case of
contour elevations), you will need to subtract the local geoid height value in
order to determine and then enter the required ellipsoidal elevation.
You also have the option of using the Set Z from Surface icon button to
set the point elevation from the corresponding cell value in the DEM you have
already selected at the beginning of the process.
In this case, the correction to ellipsoidal elevation is applied
automatically using your previously-entered geoid height.
If you have multiple sources of elevation values, you will need to
evaluate them to choose the most accurate source.
Keep in mind that these 3D control points are used to adjust the overall
RPC rectification model, so their accuracy affects the accuracy of the
orthoimage you finally produce.
When
you open an RPC image in the Georeference process, its nominal georeference
information is converted to control points at the four image corners.
Only map coordinates are supplied for these points, so their elevations
default to 0. If you plan to retain
these corner points, you should assign elevations to them from the DEM.
If you have at least 4 or 5 accurate control points, it is probably best
to delete the nominal corner points so that you have a consistent set of points
from a single source.
Most
published research on rational polynomial rectification models indicates that
even a few accurate ground control points can significantly improve the
rectification result. A large
number of control points is not required. Your
control points should be well-distributed over the image, and a representative
sampling of different elevations may be desirable as well.
Although a small number of points may be sufficient in theory, it is
difficult to provide points with equal accuracy, and evaluating the relative
quality of control points is a somewhat subjective process.
Using a larger number of points can minimize the negative impact of one
or a few less accurate points that may be difficult to positively identify and
eliminate. MicroImages is still
investigating the impact of the number of control points on the accuracy of the
final orthorectification result.
Residuals.
When
you use the Rational Polynomial model for georeferencing, the horizontal
control-point residuals are computed from the best fit of the adjusted model to
the set of active points. The Z residual is simply the difference between the
control point value and the ellipsoidal elevation from the DEM cell at that map
location. (If you set Z values from
the DEM, the Z residual will therefore be 0.)
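To make that concrete, the following sketch (generic Python with hypothetical names and a nearest-cell lookup; TNTmips may sample the DEM differently) computes a Z residual exactly as described: the control point's ellipsoidal elevation minus the DEM elevation at its map location.

    def z_residual(point_xy, point_z, dem, origin, cell):
        """Z residual = control point elevation minus the DEM elevation at the
        point's map location. dem is a 2D list of ellipsoidal elevations,
        origin is the (x, y) of the upper-left cell center, cell is the cell size."""
        col = round((point_xy[0] - origin[0]) / cell)
        row = round((origin[1] - point_xy[1]) / cell)   # rows increase downward
        return point_z - dem[row][col]

    # A point at (5010.0, 7990.0) with elevation 312.4 m against a DEM whose
    # upper-left cell center is at (5000.0, 8000.0) with 10 m cells.
    dem = [[310.0, 311.5], [312.0, 312.4]]
    print(z_residual((5010.0, 7990.0), 312.4, dem, (5000.0, 8000.0), 10.0))  # 0.0: Z was set from the DEM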
If
you have georeferenced unrectified images of high-relief areas, you know that
even accurate control points can produce high error residual values when you use
the simpler geometric transformation models (Affine, Plane Projective) typically
used with images not intended for RPC orthorectification.
These models cannot account for the fact that image cells are displaced
from their correct relative positions by different amounts and in different
directions due to their elevations and locations in the scene.
When you use the Rational Polynomial model to georeference an image, the
model explicitly accounts for the effects of view geometry and elevation on the
horizontal positions of your points in the unrectified image.
So even though you are viewing an unrectified image, the horizontal
residuals effectively show how well each point would fit the resulting
orthorectified version of the image. You
can, thus, use the residuals and summary error statistics to directly gauge the
accuracy of the orthorectified image you would produce with the current adjusted
model. You can also change the
point status and/or use a set of test points, as discussed above, to evaluate
the accuracy of individual points and the overall set of active control points.
The
column and line positions in the control point list in the main Georeference
window refer to the unrectified image. The
horizontal residuals in this list likewise are computed from the original
control point line/column positions in this image space (with displacements
compensated for by the model as noted above).
When you use the Rational Polynomial model, a second control point list
window opens, called the Control Points Projected to Orthorectified Image
window. As the name implies, the
list in this window and its summary error statistics show the result of
mathematically projecting the control point positions forward through the
rational polynomial rectification model into the orthorectified image space.
The column and line values in this second window indicate the positions
the points would occupy in this output rectified image. The horizontal residuals
are computed using a global affine best fit between image and map coordinates in
the orthorectified image. This
second window, thus, shows you the control point list that would result if you
opened the actual rectified image in the Georeference process.
The horizontal residuals for the rectified version are almost identical
to those shown in the main Georeference window for the corresponding points,
though there may be slight differences due to the round-off errors that can
occur in such a complex computation. You
can therefore use the residuals and statistics in either window to evaluate the
relative accuracy of your control points.
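For reference, the "global affine best fit" used for the projected list is an ordinary least-squares computation. The sketch below (generic Python with NumPy, hypothetical names, not TNT code) fits the six affine parameters relating image column/line to map easting/northing and returns the per-point horizontal residual distances.

    import numpy as np

    def affine_fit_residuals(img_xy, map_xy):
        """Least-squares affine fit map = A @ [col, line, 1], then residuals as
        the distance between each actual and predicted map position."""
        img_xy = np.asarray(img_xy, float)
        map_xy = np.asarray(map_xy, float)
        design = np.hstack([img_xy, np.ones((len(img_xy), 1))])    # N x 3
        params, *_ = np.linalg.lstsq(design, map_xy, rcond=None)   # 3 x 2
        predicted = design @ params
        return np.hypot(*(map_xy - predicted).T)                   # per-point distance

    # Hypothetical control points: image (column, line) versus map (easting, northing).
    img = [(10, 10), (500, 12), (505, 480), (8, 478)]
    mp = [(1000.0, 5000.0), (1980.1, 4998.0), (1989.9, 4063.0), (996.0, 4066.0)]
    print(affine_fit_residuals(img, mp))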
Tutorial
Booklet.
A
tutorial booklet providing more information on georeferencing and rectifying an
RPC satellite image is available but was not completed in time to be included on
the RV6.9 CD.
To help introduce you to this new capability, this booklet entitled Orthorectification
Using Rational Polynomials has been printed and accompanies this MEMO.
The sample exercises in this booklet use a small IKONOS image and an
associated DEM that are part of the TNTlite
sample data provided with the TNT
products. This sample data and the
online PDF version of this new tutorial are available online from
microimages.com.
Spatial
Data Editor.
Line Labels.
Auto labeling of lines will now filter lines or chains of adjoined lines based on
a specified minimum length and suppress the labels for lines shorter than that
length. This helps keep busy areas with many short roads of different names, or
other short labeled lines, from becoming unreadable at the current display scale.
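The underlying rule is simple: sum the length of each line, or of the chain of adjoined lines sharing a label, and suppress the label when that total falls below the threshold. A minimal sketch of the idea (generic Python with made-up data; TNT's own filtering also considers the display scale) is:

    import math

    def line_length(vertices):
        """Planar length of a polyline given as a list of (x, y) vertices."""
        return sum(math.dist(a, b) for a, b in zip(vertices, vertices[1:]))

    def labels_to_draw(lines_by_label, min_length):
        """Keep a label only if the combined length of its lines meets the minimum."""
        return [label for label, lines in lines_by_label.items()
                if sum(line_length(v) for v in lines) >= min_length]

    roads = {
        "Elm St":  [[(0, 0), (40, 0)], [(40, 0), (55, 5)]],   # ~56 m total
        "Oak Ave": [[(0, 0), (0, 300)]],                      # 300 m
    }
    print(labels_to_draw(roads, min_length=100))   # ['Oak Ave']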
Miscellaneous.
A vector object being edited may have been optimized before it was loaded, but you
have the option to save a vector layer with optimization turned off for faster
saving. To avoid inadvertently omitting the step of optimizing the edited vector
object, and thereby compromising future performance, a dialog now asks whether you
want to optimize this vector object.
The Reverse Line
Points tool can now be applied to "selected" lines and "all" lines. You no longer
have to select each line individually when the collection of lines can be selected
by query or some other means, or when you wish to reverse the point order in all
lines.
Map
Layouts.
Print To
Illustrator.
When
converting a TNT layout to Adobe Illustrator (*.ai) format, you can now choose
between CMYK and RGB options for better color control.
PDFs Using SVG.
Advantages.
You
can embed an SVG layout prepared by TNT
products into an Adobe PDF file that you create in Acrobat.
The ubiquitous Adobe Reader, without modification, will use its built-in SVG
viewer to display this content.
This is an excellent method of publishing an interactive SVG layout and
its embedded tools in a form everyone can use via their Adobe Reader without
acquiring a plug-in for their browser. These embedded SVGs retain their
interactive features, so coordinate readouts, DataTips, measurement tools, and
other JavaScript tools will be available. If you wish your SVG
layout to carry images into a PDF then you must choose the Embed Images option
when rendering this SVG layout from a TNT
layout.
Setup.
To add an SVG layout to a new or existing PDF file on Windows or Mac OS X, you
must have the SVGAnnots plug-in installed in Adobe Acrobat; it can be downloaded
from www.planetpdf.com/mainpage.asp?webpageid=3250.
Limitations.
An SVG layout embedded in a PDF cannot be printed by Adobe Reader.
This can be advantageous if you do not want the content distributed
outside of an encrypted, password-protected PDF file.
At this time the custom layer-control visibility menu, which is accessed via a
right mouse click (control key and click for Mac OS X), does not work for embedded
SVG content. Layer control was only recently added to PDFs, so this may simply be
something that is yet to be supported by the SVG plug-ins for Reader.
Render
to SVG.
Introduction.
Converting between the layouts of various vendors is a complex task since they are
not well-defined file formats that can simply be exported or imported.
Layouts, including those prepared in the TNT
products, are designed as containers for a specific end use in electronic
publication or for printing. Popular
layouts such as SVG and PDF go even further by providing means of controlling
layers, using tools, and other similar functionality. RV6.9
continues to expand the functionality of preparing maps for distribution in an
SVG layout that is a completely open layout standard.
Now a new Render To SVG process is available on the Layout menu in
addition to the Print to SVG approach. As
a separate process, it provides a direct way to make conversions to an SVG
layout. It is used by TNTserver
to return SVG content to a browser.
Embedding Images.
Images and other
rasters used in the SVG layout can either be embedded or linked according to
your objectives. Whichever option
is selected, the format of these rasters can now be selected to be either PNG or
JPEG (V6.8 supported only PNG). Each
has its advantage depending upon the content of the raster and your choice of
lossless or lossy compression. Remember
when making this selection that choosing JPEG will create lossy rasters while
PNG rasters will be losslessly compressed.
Embedding in
HTML.
You can now select
Embed in HTML (ActiveX) on the Render to SVG dialog.
This option will integrate the SVG layout into HTML by encapsulating it
with normal HTML. You should only
use and exploit this feature if you are sure your target audience will use the
SVG layout you produce with Microsoft's Internet Explorer or other Windows
applications that support ActiveX. Do
not use this option if the SVG layout will be widely disseminated, such as via
the Internet. However, the
controlled conditions can often be met when it will be used with proprietary or
company-wide applications via a private intranet or virtual private network over
the Internet.
Stylesheets.
You can now create
and reference external stylesheets (V6.8
supported only internal stylesheets). Using
an external stylesheet is particularly useful if you wish to have more than one
SVG file reference the same stylesheet. The
Adobe SVG Viewer and Batik support gzip to compress these external files.
Optimize for Adobe
Illustrator.
This option ensures
that the SVG is organized to be more compatible with the import process in
Adobe Illustrator.
Clip to SVG.
The option to Use
View 1 Extents on the Render to SVG dialog will convert to SVG only that portion
of the TNT layout that matches the
extent of its first view. This
will clip the TNT layout to that
extent and produce a smaller SVG that is focused on the area of interest.
This is particularly appropriate where a smaller SVG will be faster to
download from your web site. This
feature is also used by TNTserver to
clip an SVG to the smaller extent requested by a TNTclient.
Sample Tools.
You can choose
which sample JavaScript tools to embed and supply with the SVG content.
V6.8 provided sample JavaScript tools for layer controls and coordinate displays.
DataTips.
If you have set up a DataTip for your uppermost vector layer, it can now also be
defined in SVG. DataTips are not supported for raster objects; thus, if the
uppermost layer is a raster, the vector layer supplying the DataTips (for example,
street names) can be hidden under that raster layer. If you choose the Show
DataTips option on the Render to SVG dialog, the JavaScript tool to use your
DataTips will be added to the SVG. The SVG viewer or browser will then
interactively expose these DataTips for each associated graphical element. In
addition, the element (for example, line or polygon) to which the DataTip applies
can optionally be set to show up (a so-called "mouseover event") in an inverse
color highlight or by blinking in inverse color.
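The DataTip tool that Render to SVG embeds is a JavaScript component and is not reproduced here. As a rough illustration of the underlying idea only, the sketch below uses Python to write an SVG polygon whose <title> child most SVG viewers display as a hover tooltip, the simplest form such a mouseover hint can take; the lake name and values are invented.

    from xml.sax.saxutils import escape

    def polygon_with_tip(points, tip, fill="#9ecbff"):
        """Return an SVG <polygon> element carrying a hover tooltip.
        points: list of (x, y); tip: the DataTip text."""
        pts = " ".join("%g,%g" % p for p in points)
        return ('<polygon points="%s" fill="%s" stroke="black">'
                '<title>%s</title></polygon>' % (pts, fill, escape(tip)))

    svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">\n'
           + polygon_with_tip([(20, 20), (180, 40), (150, 170), (30, 150)],
                              "Example Lake: area 8.9 sq km")
           + "\n</svg>")
    with open("datatip_demo.svg", "w") as f:
        f.write(svg)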
SVG Versus Flash.
Many web sites now
use the Shockwave Flash (SWF) plug-in from Macromedia.
You may be aware that some serious patent-related legal wrangles are appearing
that may impact this approach. SVG,
an open World Wide Web Consortium standard, has many useful features, especially
for cartography, including some that are not present in SWF.
To review these advantages in tabular format please see Comparing .SWF
(Shockwave Flash) and .SVG (Scalable Vector Graphics) file format specifications
at www.carto.net/papers/svg/comparison_flash_svg/.
ABSTRACT:
"The comparison is based on both file format specifications: Macromedia Flash,
Open SWF Filespec, and SVG (http://www.w3.org/TR/SVG/index.html).
Not all Viewer/Plugins support the full range of the specifications yet.
Macromedia compensates some of the missing functionality through its authoring
systems - thus preventing developers to use advanced features by just writing
the file-format itself. So far I haven't found any link to the Flash MX .swf
file-format spec; if you have a pointer, please tell me. This means that this
comparison might not be complete regarding the newest swf version.
"Please note that this is a comparison for people dealing with integrated dynamic
content generation systems. We are aware that both .SWF and .SVG have their
particular advantages/disadvantages; in some parts they are concurrencing each
other, in some not. There is an interesting verbal summary on reasons why one
should consider using SVG over MM Flash by Pete Schonefeld: SVG is real Flash.
Eric Mauvière wrote a paper about Flash and SVG in a cartographic context: Flash,
SVG et les autres (French)."
DV7.0: Tools and Font Management.
Measurement
Tools.
You can use a toggle on the Render to SVG dialog to add a JavaScript tool to your
SVG that permits its user to make interactive measurements by drawing lines and
polygons with the mouse. This tool is activated by an icon added to the view of
the SVG. When the line or polygon drawing is complete, a pop-up window will show
its length or area. Since you have started with a TNT layout, these measurements
will be calibrated to the georeferenced layers in the SVG and will read out in the
linear or area units you set up in your TNT layout.
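The measurements themselves reduce to standard planar geometry once the drawn vertices are in the layout's georeferenced map coordinates. The sketch below (generic Python, not the embedded JavaScript tool) shows the usual segment-sum length and shoelace-area computations such a tool can rely on.

    import math

    def path_length(pts):
        """Planar length of a drawn line, in the layout's map units."""
        return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

    def polygon_area(pts):
        """Shoelace formula for the planar area of a drawn polygon."""
        area = 0.0
        for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    square = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
    print(path_length(square + square[:1]))   # 400.0 (perimeter)
    print(polygon_area(square))               # 10000.0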
DataTips.
Multiline DataTips
defined for vector layers in your TNT
layout can now be defined in the SVG content.
The new text styling that can be used in your DataTips in the TNT
layout is used to style the DataTips in the SVG layout.
The Mount Saint Helens
sample SVG provided in the Gallery at www.microimages.com/documentation/SVG.htm demonstrates this kind of DataTip and
styling. In this example, the
DataTip for the polygons representing water bodies shows their description,
area, and circumference and uses a transparent color fill.
Text.
Managing the
conversion of text blocks in a TNT
layout to SVG content is complicated for the same reasons reviewed in earlier
issues of this MEMO. Many fonts are
proprietary and can only be used locally in a TNT
layout and not provided with the SVG. This
means that you cannot guarantee that a specific font is available on the
platform using the SVG. This is
further complicated by the international nature of a TNT
layout, which can create SVG content in any language for a user who may not have
the corresponding multilingual font installed. Font
management by conversion to a portable representation of the font is currently
being added to Render to SVG.
*Color Management.
Background.
Until recently only
empirical methods were available to manage color calibration from a scanner to
computer display to a color printer, thus, most software handled this as a trial
and error process. For example, you
created color swatches or patterns on the monitor with known RGB values and then
printed them on a specific printer. From
this, specific RGB colors could be assigned to features in your project based
upon the RGB identities in your printed samples or by visual interpolation
between selected samples. This
worked well for categorical data like polygon fills and line work and only
crudely for images where wide interpolation was required.
For images, since you owned only one expensive color printer, it worked
better to gradually develop skill in how to color balance an image on a screen
so that it created a suitable color print.
In other words, by trial and error you learned how to adjust the color
balance settings of the monitor and in the color balance process to produce the
desired color print regardless of how it looked on the screen. This was
definitely an art and had little computer science in it.
Next some
graphics-oriented software passed through a transition period where a color
monitor and printer could be interrelated with a spectrometer with difficulty
and patience. Color samples on a
color monitor could be characterized with a spot spectrometer or reference color
swatches and then printed on a color chart with many in-between colors. Using this
approach some of the "art" could be removed from the task of reproducing
what was on the monitor on the color print. The
procedure was most successful in faithfully defining spot colors for commercial
graphics.
Until recently all
but the most expensive color monitors gradually changed in color with age or
simply were constantly readjusted by each user.
Scanners used illumination bulbs that faded in intensity.
Most of our first color printers produced color only marginally and did
not maintain a consistent color balance. For
example, until recently a color laser printer's color balance drifted as various
amounts of each color toner were used: the 100th identical print did not have
the same color balance as the first!
Now, with wide
consumer interest in digital photography and good stable monitors and color
printers, easy and reliable color management is important.
This has forced hardware and software developers to widely adopt some
standard color calibration approaches for color management.
These standards have to apply across your many color image data sources
and color devices ranging from cameras, to triple monitors, multiple online
color printers of several sizes, and even to commercial printers.
To keep pace with these opportunities, color calibration has been
introduced into the TNT products.
General
Methodology.
Your color monitor
inherently operates in an RGB color space.
It produces color by filtering out the selected wavelengths of white
light or directly emitting light of the appropriate RGB combination.
The CMY and black inks used by your color printer act in an inverse sense
to absorb the incident white light and reflect back only the desired RGB
wavelengths. The latest photo
printers using 6, 7, or more inks are simply providing variations of CMYK inks
such as light or diluted cyan ink. Part
of color calibration is to translate the RGB data used by your monitor
into the CMYK data required by your printer.
To do this, data is moved in and out of RGB color data values.
The scanner calibration creates RGB, your monitor modifies RGB for its
particular response curves, and your printer converts from RGB to its dithered
CMYK dots. Analysis software, such
as the TNT products, work with and
create color as RGB values, such as in a TNT
color composite raster object. It
is then the responsibility of the hardware vendors producing color input and
output devices to interact with these RGB values and create the corresponding
correct and consistent color with their product.
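As a rough illustration of the RGB-to-CMYK step just described, the sketch below uses the simplest textbook conversion with full black replacement. It is illustrative only; actual printer drivers rely on measured ICM/ICC profiles rather than a formula like this.

    def rgb_to_cmyk(r, g, b):
        """Naive sRGB (0-255) to CMYK (0-1) conversion with full black
        replacement; illustrative only, not a color-managed transform."""
        r, g, b = r / 255.0, g / 255.0, b / 255.0
        k = 1.0 - max(r, g, b)
        if k >= 1.0:                       # pure black
            return 0.0, 0.0, 0.0, 1.0
        c = (1.0 - r - k) / (1.0 - k)
        m = (1.0 - g - k) / (1.0 - k)
        y = (1.0 - b - k) / (1.0 - k)
        return c, m, y, k

    print(rgb_to_cmyk(200, 20, 20))   # a saturated red: mostly magenta and yellow ink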
To meet these color
calibration requirements, standard procedures have been gradually adopted in the
computer and print industries to assist you in moving the color image you create
on your display to a reasonably good color representation on paper. Many
color monitors and printers now use this standard color calibration scheme, even
low cost devices aimed at desktop digital photography.
Microsoft calls their scheme Image Color Management (ICM) while Apple
uses a similar approach called International Color Consortium (ICC).
In fact the color ICM and ICC profiles for a physical device can be used
by either Windows or Mac OS X, requiring only a change in their file extension.
Even the latest TIFF libraries, whose support is currently being
integrated into DV7.0 of the TNT
products, support tags to include these color profiles as a component of your
TIFF files. At the moment it is
unclear how these will be used with TIFF files.
One possibility is that they will be used to translate the data stored in
the TIFF to sRGB or to various other color models.
This would permit raw color values created by some device to be stored in
a TIFF file to which is added the ICC/ICM or other color profiles needed to
translate these values into sRGB.
Software is now
expected to create, store, and manage color using the standard-Red-Green-Blue (sRGB)
color model, which most analysis software including the TNT
products has long used. These color
profiles are translation tables that specify how that particular input or output
device (for example, printer, monitor, scanner, camera, and so on) will translate the
sRGB color values it receives into industry standard colors!
For example, if an sRGB red color of 200,20,20 is sent to a variety of
color printers via their ICM or ICC profiles, they will all more or less
reproduce the same red by translating it via their respective profiles.
The concept is applied to monitors in the same fashion.
It is the responsibility of the hardware manufacturer to provide a good
ICM/ICC profile for their device. As
a result, most printers and good quality monitors provide them and your
operating system deals with providing TNT
products access to them. The sRGB
color model has become so standard that some devices have built the color
profile translation to or from sRGB directly into their hardware. Many
new, low-cost printers, such as those from Hewlett-Packard, directly accept the
sRGB data and translate it internally into standard color.
If you are using a Windows driver, it expects sRGB as its input.
Thus, if you are using an older HP printer, especially if it is large
format, to take advantage of TNT's
color management you must choose to let its Windows driver do the dithering (see
section below entitled Printing).
All color is not
equal. As a result, a variety of
different color representations were developed earlier in the photographic
printing and color proofing/printing fields.
Thus, expensive color printers may come with a variety of profiles.
For example, MicroImages' Xerox solid ink color printers used to print
the attached color plates provide profiles for Vivid Color, SWOP Press,
Euroscale Press, Commercial Press, SNAP Press, DIC, Toyo, Fuji Proof, and of
course direct sRGB. Printers
developed specifically for color proofing will supply even more color model
choices.
You have probably
also noticed that you no longer get to adjust RGB, HSI, or equivalent color
balance knobs on your monitor. Now
you select, in your operating system or application software, which of a variety
of color models to use to set the color balance of your monitor to please your eye
and to load the proper profile relating it to the sRGB data color model. This is
usually done in the display preferences provided by your operating system by
choosing from the profiles it has stored for your specific monitor or that were
installed separately with the monitor. Similarly,
you can choose the color model to use in the conversion of sRGB by the printer.
This is provided as an option somewhere on the print dialog from the
choices installed for that printer or provided by the operating system.
All of this sounds
wonderful even if a bit confusing. Alas,
not all color devices are created equal. The
human eye can distinguish a myriad of subtle color variations, next comes the
computer monitor and display board combination, and trailing far behind is the
color printer. All these have a
measurable color space called a gamut, which is a map of their color sensitivity
or range. So the key problem is
that you can easily create or acquire sRGB data within the gamut of your color
monitor that falls outside the color gamut of any printer.
Thus your software must have a means of dealing with sRGB colors that cannot be
reproduced on your printer, and it is useful to know what these colors are and
where they occur in the image on your monitor (that is, exactly which areas and
cells). Many
articles supplying more details on this topic can be found on the internet.
It is interesting to note that if you search Google for "ICM Color Calibration
from sRGB" the first of 775 entries points to microimages.com and the MicroImages
MEMO of December 2003 announcing this capability for RV6.9, and if you search for
"International Color Management for sRGB" the first of 707 entries provides access
to microsoft.com and their white papers on the subject.
Calibration.
X Server.
The
TNT professional products for
Windows use an X Server that must be set up to use a color profile. For Windows
this is set in the X Server Preferences dialog on the Options panel by choosing
to let the display driver do the color management or by choosing a specific ICM color
profile. For Mac OS X select
Colors: From Display on the X11 Preferences dialog. Illustrations and more
details are provided on the attached color plate entitled Color Management
for X Server and Views. UNIX
and Linux have similar procedures for setting up your X Server preferences.
Please see the specific vendor's reference materials for assistance.
Monitor.
RV6.9
introduces the use of ICM/ICC color calibration into the TNT
products. You must make Windows
aware of your monitor's ICM color profile by selecting it in the advanced
settings for your display. Often
you can select from more than one, which will set your preference for the tints
and tones on your monitor and establish the appropriate ICM translation between
your preference and sRGB color space. In
the Mac OS X your viewing preference and its ICC translation is selected using
System Preferences / Displays / Color. If
you are using an Apple monitor you have only a few choices including the direct
sRGB option.
Printer.
The color profile
used for your printer can be selected in the Printer dialog where you select the
specific printer to use and all the many other characteristics of your print
job. For Windows you select the ICM
profile on the Color Management tabbed panel of the printer's Properties
dialog. Mac OS X provides your
choices for ICC profiles for each printer you select under File / Print on its
Printer Features dialog. For Epson,
Xerox, and many other manufacturers, you may have many choices of named ICM/ICC
color profiles matching various commercial color printing standards or personal
preferences. You may also have only one or two standard choices, or none at all,
for an older printer or an HP printer that expects direct sRGB input. Most new HP
printers provide no choices, as they directly accept only sRGB.
Managing
Out-of-Gamut Colors.
As
has already been mentioned, you can use the import, linking, data manipulation,
and color balancing tools in the TNT
products or any other product to produce some sRGB colors in your objects and/or
on your monitor that cannot be reproduced on your printer.
These are often the most saturated sRGB colors, described as "falling outside the
color gamut of your printer"; in other words they are outside its color range. You
like the color results on your screen, but the printer simply cannot produce all
of the colors you are viewing or have created in an object or a layout.
The ICM/ICC color calibration procedure has to be told what to do about
this: how to map those out-of-gamut colors into the gamut of the printer!
Depending upon the rendering method selected, sRGB colors outside your
printer's gamut will always be changed and those within its gamut may be
changed. Now in TNTmips
you can select your rendering intent for your TNT
print on the Rendering Intent option menu in the Profile tabbed panel of the
Page Setup dialog.
Perceptual
Rendering (best for images).
Use this method for
your images and other direct screen prints.
These sources will have a wider gamut, or range of colors, than your
printer. It compresses every sRGB
color proportionally to fit within the gamut of the printer, thus, preserving
the chromatic relationships between colors.
This will typically desaturate all your colors since your printer's
gamut is not large enough to reproduce saturated colors. However,
once you are satisfied with what you are viewing as the current sRGB
representation of your image on the screen, this method will reproduce it as
faithfully as is possible on the printer. This is the method set at
installation by MicroImages until you change it to your preferred method.
Saturation
Rendering (best for graphics).
Use this method for
your maps made up of styled graphical elements (for example, roads, rivers,
solid filled polygons, and so on). These
kinds of prints are made up of a few distinct colors, many of which you
deliberately selected to be saturated. This
method will modify the source sRGB colors to exactly fill your printer's
gamut while preserving the saturation of the colors such that some parts of the
modified RGB color space are expanded and others are compressed.
Since saturated colors are part of your objective in the graphical
representation of features, this method meets that objective.
Rendering
Complex Layouts.
Gradually TNTmips
and other products have made it convenient to integrate raster and vector and
other graphical materials not only in analysis, but also in publication.
Typically the printed product you now produce is from a map layout
combining color and grayscale images, saturated graphics drawn separately or
over the images, color legends, black crisp text, and significant areas of white
paper. Choosing either the
perceptual or saturation rendering methods is not optimal for this kind of
complex, composite layout. The
relative (a compromise method) and absolute colorimetric (the hard work method)
methods are available to print complex layouts.
Relative
Colorimetric Rendering (try first for composite map layouts).
This method will
adjust the brightness of all colors in the sRGB source so that every possible
sRGB color is within the gamut of the destination printer.
The white point of the sRGB input data (for example, 255R, 255G, 255B) is
converted into the printer's white, or what is called destination white (that is,
paper white with no ink laid down). For this
reason, this method is also called the "graphics" or "logo color" method
in the printing industry. All other
sRGB colors are then shifted in brightness relative to the change in the white
point. The resultant print
may be lighter or darker than its representation on the monitor.
When a complex map layout is printed, this rendering method will ensure
that the white areas within its graphics, text blocks, and margins are white.
It will also make an overall brightness adjustment of all the areas of a
complex map so that all the source images, graphical lines and fills, logos, and
other components are inside the gamut, or color reproduction range, of the
printer. This is also the method of choice when your printer has a narrow gamut.
Absolute
Colorimetric Rendering (a more complex approach for map layouts).
This method does
not change any source sRGB color that falls within the gamut of the destination
printer or other device. Colors
that are outside the gamut of the destination device are transformed to the
closest color at the gamut's edge. White
areas may or may not be reproduced as no-ink, paper white.
Obviously this is a good method to apply to any printing device with a large
gamut, such as a color photo printer with 6, 7, or more inks or a dye-sublimation
printer, and especially if the destination is a film printer.
This is the reason it is referred to as the "proof," "match," or
"colorimetric" method in the printing industry, since it attempts to provide
a print that for the most part can be compared absolutely from printer to
printer.
If
you are using this method and some component of your map layout, for example an
inset image, is not reproduced appropriately, you need to alter it before this
proof printing. Using other
processes in TNTmips you can adjust,
stretch, edit, and otherwise change the sRGB color of any of your layout?s
components until this method prints that component and all other components to
your satisfaction. This may produce
the best single print of the map for your uses.
However, you can also require that any commercial printing process
produces results that match this proof print when supplied this same sRGB file
in TIFF or some other lossless raster format.
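To make the difference between the two colorimetric intents concrete in the simplest possible terms, the toy sketch below works on a single lightness-like channel with an assumed printer range of 0-80 against a source range of 0-100: the relative method rescales everything so source white lands on printer white, while the absolute method leaves in-gamut values untouched and clips the rest to the gamut edge. Real rendering intents operate on full three-dimensional gamuts through ICM/ICC profiles, so this only mimics the behavior described above.

    PRINTER_MAX = 80.0    # toy printer gamut: 0..80
    SOURCE_MAX = 100.0    # toy source (monitor) range: 0..100

    def relative_colorimetric(v):
        """Scale so source white (100) becomes printer white (80);
        every value shifts, preserving relative brightness."""
        return v * PRINTER_MAX / SOURCE_MAX

    def absolute_colorimetric(v):
        """Leave in-gamut values unchanged; clip out-of-gamut values
        to the nearest reproducible value (the gamut edge)."""
        return min(v, PRINTER_MAX)

    for v in (20.0, 60.0, 95.0, 100.0):
        print(v, "->", relative_colorimetric(v), "|", absolute_colorimetric(v))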
Soft Color
Proofing.
At
this point you are wondering how to determine if you have all the calibration
pieces in the right places, if your TNT
color management is set up correctly, and which color rendering method works
best. Furthermore, you are going to
encounter a wide variety of data sources and color printers on your network
requiring periodic reexamination of these same questions.
Visual
Inspection.
The
TNT products now provide you a print
preview method called soft color proofing.
After your ICM/ICC color calibration is set up, soft proofing will show
you how your sRGB data currently displayed in a view of an object, group, or
layout will be reproduced in color on your printer with the selected rendering
method. Soft proofing assumes you
have set up the ICM/ICC calibration of your monitor and that your monitor has a
significantly larger color gamut than your printer.
There are exceptional output devices such as color film writers, internet
processing labs, and others that can have a larger gamut than your monitor.
However, soft color proofing makes the assumption that all the colors your
printer can reproduce can be reproduced on your monitor.
The
procedure maps all sRGB color values in the objects in the current view through
your selected rendering method and your printer's ICM/ICC color profile and
redisplays them. In this View
window, which you can zoom, alter, and so on as you choose, you can check how
the printer will preserve or alter the sRGB colors you have created.
For example, you can take a look at how well various water colors will be
separated or how saturated your graphics will be.
Use the "Proof to Screen" toggle on the TNT
printing Page Setup dialog to redraw your current view as a soft color proof.
The attached color plate entitled Color Management for Printing and
Proofing illustrates how a display of a well color balanced sRGB image is
redisplayed to demonstrate several soft color proofs.
Using
Out-of-Gamut Alarming.
As
an option, your soft color proof can substitute an alarm color for every one of
the color cells making up input to the current view that will be changed from
their true sRGB value during printing because they fall outside the gamut of the
printer. The soft proof procedure
maps all sRGB color values through your selected rendering method and your
printer's ICM/ICC color profile. From
this all sRGB colors that are outside the gamut of the printer can be
identified. Thus, via this option,
your color view is redisplayed assigning an alarm color to every pixel in the
view representing cells whose sRGB colors cannot be printed.
The default alarm color is red, but you can change it to any other color
you prefer.
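In outline, alarming is a round trip: push each sRGB value through the selected rendering method and the printer profile, preview the result, and flag the cells that changed by more than a tolerance. The sketch below (generic Python with NumPy) mimics that logic with a toy desaturating "printer" standing in for a real ICM/ICC transform, so the flagged pixels, the tolerance, and all names are illustrative only.

    import numpy as np

    ALARM = np.array([255, 0, 0], dtype=np.uint8)   # default alarm color: red
    TOLERANCE = 8                                    # allowed per-channel change (0-255)

    def toy_printer_preview(img):
        """Stand-in for the real soft proof (rendering intent + ICM/ICC printer
        profile): this toy printer cannot reproduce highly saturated colors, so it
        pulls every pixel 30% of the way toward its gray value."""
        rgb = img.astype(float)
        gray = rgb.mean(axis=-1, keepdims=True)
        return (0.7 * rgb + 0.3 * gray).round().astype(np.uint8)

    def alarm_out_of_gamut(img):
        """Return the soft proof with the alarm color substituted wherever the
        round trip changed the pixel by more than TOLERANCE on any channel."""
        proof = toy_printer_preview(img)
        changed = np.abs(proof.astype(int) - img.astype(int)).max(axis=-1) > TOLERANCE
        out = proof.copy()
        out[changed] = ALARM
        return out

    # A 1 x 3 "image": saturated red, muted blue-gray, mid gray.
    img = np.array([[[230, 10, 10], [90, 110, 140], [128, 128, 128]]], dtype=np.uint8)
    print(alarm_out_of_gamut(img))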
The
optional, alarmed soft color proof permits you to quickly check in advance which
sRGB colors are impossible for your printer to reproduce and will be altered in
your print. You can also use the
alarmed soft color proof after printing to locate and review the significance of
these areas of color change in the print.
The illustrations
in the attached color plate entitled Color Management and Printer Profiles
compare the vastly differing areas of alarms produced by a printer with a wide
gamut and one with a narrow gamut. This
plate also illustrates how you can use this alarm system to point you toward
possible adjustments to your image before printing.
It illustrates how adjusting a single RGB value of the ocean portion of
this image to a different, but satisfactory, blue moves that color representing
this very large area into the gamut, or color reproduction range, of the color
printer. You can always alter the
sRGB value of any large alarmed area in the image (for example, water or desert)
to a slightly different and perhaps equally suitable color of your choice that
is within the color gamut of the printer. If
you do not do this then, as explained above, your choice of rendering method
will do this for you automatically and perhaps produce a less desirable result.
Printing.
Support for printers with 6, 7, or more ink colors has been implemented by sending sRGB to their drivers.
These printers extend the color gamut by adding black, light cyan (dilute cyan), light magenta (dilute magenta), and light gray (dilute black) inks.
The TNT products provide you with 3 different approaches to dithering for color printing: dithering by the TNT product, by the printer via its driver, or by the operating system.
The tutorial booklet entitled Printing was revised to cover more
material on this subject after the RV6.9
CDs were reproduced. Download the
PDF for this revised booklet from www.microimages.com/getstart/printing.htm.
Depending upon the printer and operating system you will need to select
the method that produces the best results.
The following are some of the advantages and disadvantages of each
approach.
Via the Operating
System.
You can choose to
let the operating system do the color management and dithering.
Advantages.
This method will provide the TNT
products access to ICM/ICC color profiles and let the operating system optimize
the color management. It can take
advantage of the higher resolutions supported by the printer.
For example, most new HP printers have a native resolution of
1200 by 1200 dpi or higher, but only accept input at 600 by 600 dpi and
internally dither it to produce the higher resolutions that match their fixed
printing resolution. This reduces
computation on the host and provides for faster communication of the input to
the printer.
Disadvantages.
This method can create large files and may be slow.
Also, commonly available drivers are automatically installed with Windows
but may not be available in other operating systems.
Furthermore, if drivers are available for other operating systems, they often arrive later in that product's life cycle, can be more error-prone, and lack features.
Via TNT Using the
Printer Driver.
This method lets
the printer use its manufacturer's driver to do the dithering.
Advantages.
This method permits TNTmips
to use the ICM/ICC color management and works with every operating system for
which the printer's driver is available directly from the manufacturer and not
via the operating system. It can
also take advantage of the higher resolutions supported by the printer.
For example, most new HP printers have a native resolution of
1200 by 1200 dpi or higher, but only accept input at 600 by 600 dpi and
internally dither to use the higher resolutions.
This reduces computation on the host and provides for faster
communication of the input to the printer.
Disadvantages.
This method also creates a large temporary file.
Via TNT's
Dithering.
This is a legacy
approach dating from earlier years when each software vendor had to support each
printer.
Advantages.
A small temporary file is created, and this method is often faster. It may also be the only available method, or the method of last resort, especially with older printers.
Disadvantages.
This method cannot use the ICM/ICC profiles and other related options as effectively, since color is affected by the dither pattern selected in the TNT product.
The resolution can also be limited on some printers.
Furthermore, it is now difficult for MicroImages to add new printers for this method because printer manufacturers are no longer interested in documenting their printer formats.
*
SML Scripting.
Introduction.
Improving and
expanding SML has continued to
receive considerable attention for RV6.9.
A complete overhaul of the function/class descriptions has been completed
and all functions now link to at least 1 example of their use.
The V6.8
tutorial booklet for SML has been
divided into two: Writing Scripts with SML and Building Dialogs in SML.
These new PDF booklets are now installed as part of RV6.9
of your TNT products and provide
twice the number of pages as the previous single booklet.
The ability of your
TNT products to be linked to
other VB, C++, or Java programs has been expanded.
This provides a means of using your programs to establish 2-way
communications between your TNT
products and other programs and products.
Macro and Tool
Scripts can now be attached as script objects to groups and layouts for
automatic access as part of these containers.
Tools and information for customizing existing Tool Scripts and Macro
Scripts are available.
The SML
Debugger has been improved. It now
also provides a step timer mode for running scripts, which will help you
pinpoint where your script's operation is being slowed by inefficient script
code.
In V6.8
you could use the internal X Windows approach or the XML approach to creating
your dialogs. If you use your
preferred XML editor to create dialogs, you can now use SML's
Document Type Definition (DTD) to validate that XML.
You can even use Visual Basic to create separate programs to provide your
dialogs and forms for use with SML.
Important new
classes have been added such as GUI_CANVAS to support drawing within custom
windows created by a script, GRE_LAYER_SCRIPT to provide access to SML
script layers in a view window, and TNTSIM3D to permit the use of SML
during simulations, among others.
Caveats.
The efforts discussed here to increase the usefulness of SML have for several months limited our ability to promptly add new functionality
(classes, methods, and functions) to SML
in response to your individual needs and requests.
Now that the usability of SML
has been greatly improved, your requested code additions can be added and new
requests can be more promptly addressed as received.
Sometimes we receive requests that will take a lot of effort (for
example, array handling) or are not appropriate for SML
(for example, image classification), and these will be assigned a lower
priority. However, continuing
additions to SML are a good reason
why you should periodically apply the free patches to your RV6.9
to gain access to them.
Expanded
Reference Booklets.
The online tutorial
booklet to help you learn how to use SML
has been doubled and divided into 2 booklets.
You still have available the booklet Writing Scripts with SML but
it has been completely revised and expanded to 60 pages.
Many new topics are now covered including the latest new features such as
how to use SML to communicate
between TNT products and with your
Visual Basic programs. Since the
reference material for SML has been
expanding rapidly some of it was moved to a second and new booklet entitled Building
Dialogs in SML. This new booklet focuses on how to build your SML control dialogs using either of two approaches: the older X Windows/MOTIF approach or the newer XML approach.
Please note that, although not yet covered in this booklet, you can now also use your own Visual Basic programs to create dialogs, forms, and other user interface components for your SML scripts and to communicate between the TNT products and other products.
* Expanded
Documentation/Examples of Functions/Classes
A software engineer
has been reviewing the online documentation for all the SML
functions and classes to determine if each has a description of purpose and
definitions for its parameters and methods.
This documentation is now available for every class and its member
variables and for every function and its parameters.
The script examples
in the online function documentation have been combined with all of MicroImages' available "public" scripts into a single large library.
This collection of 206 files provides standalone scripts, Tool Scripts,
Macro Scripts, APPLIDATS, movie scripts, TNTsim3D
scripts, sample queries, and script fragments that contain example uses of SML
functions. Although the online
function documentation is presented to you in the same form as before, the
script examples shown are now drawn from this unified script library.
This work was not complete in time to be included on the RV6.9
CD. The PV6.9
has been modified to use this sample script library, but you will need to
separately download the script library itself from microimages.com.
The SML Editor in PV6.9 will
detect the absence of this library when you access the online function
documentation and attempt to view the sample script for a function.
In place of the missing example you will find instructions on where to
download the library and where to install it for use with PV6.9.
All of these sample
scripts are now being parsed automatically every night.
This helps ensure that the daily changes in the TNTsdk
that may impact SML are promptly
detected and repaired. The link
between each function and its example is also being tested nightly.
These efforts
for PV6.9 have significantly
improved SML's supporting
information as follows.
-
All of the 324 classes, 557 class methods, and 794 method parameters have
descriptions.
-
All of the 976 functions and their 3074 parameters have descriptions.
-
There are examples for the use of 692 functions that are automatically
checked daily to ensure that they parse correctly and produce no warnings.
-
There are 192
functions that do not need examples as their use is obvious (e.g., mathematical,
logical, and so on).
-
There are 92 functions that still have no examples, but these examples are being written now and will shortly reduce this number to 0.
Installing
Scripts.
In V6.8
the SML Macro Scripts and Tool
Scripts you installed to act on the spatial objects in your current view (or on
any other specified objects) were automatically added to the View window's icon
bar with your specified icon button and ToolTip.
Tool Scripts were also added by name to the Tool menu in the View window.
The installed Macro and Tool Scripts were then available from any View
window throughout TNTmips.
Some of you are adding many Macro and Tool Scripts, with the result that
there are so many script icons that the icon bar becomes confusing, and some of
the installed scripts might not be usable with the data currently being viewed.
During installation of a Macro or Tool Script in RV6.9,
you now have the option of omitting the icons and providing access only via the
Tool menu (for Tool Scripts) and a new Macros menu.
These and other adjustments to the process of gaining access to the SML
development tools and installing Macro
and Tool Scripts are discussed and illustrated in the attached color plates
entitled Macro Script Setup and Tool Script Templates.
As discussed in the following sections, you also have the option of
limiting the scope of the installation of Tool and Macro Scripts.
Adding Custom Tools
to Groups and Layouts.
Background.
Groups and layouts
are used to define and maintain the relationships between collections of TNT
geodata objects in various Project Files (or linked external files in other
formats). Creating a group or
layout organizes a collection of specific geodata into a meaningful unit and
defines how it is used together in a display group or layout, a map layout, an
edit session, an atlas, and so on. SML
can also be used to create complex analysis tools (Tool and Macro Scripts) that
are tailored to the data in 1 or more objects in a specific group or layout.
In V6.8 SML
Macro and Tool Scripts were added to every View window, where they were always
presented until manually removed. This
approach is suitable if your special tools can be generically applied to any
View, or at least to any raster object, any vector object, and so on.
However, this approach did not let you limit the installation of your
specialized tools to the specific groups or layouts that contain the appropriate
target data.
Implementation.
To streamline this
application of scripts for data-dependent custom tools in RV6.9,
you can optionally install Macro Scripts and Tool Scripts for use only with a
specific group or layout. The
installed scripts are then stored within the group or layout object.
When that group or layout is selected, the associated scripts are added
for your use to the menu bar or the icon and menu bar of the View window.
Unlike the previous approach outlined above (which is still available),
these menus and icons are not always part of the view.
They are only presented when the group or layout to which they are
attached is displayed. These new
procedures for managing and applying custom tools are illustrated in the
attached color plate entitled Use Layouts to Customize
TNTatlas/X.
Delivering scripts
as part of the layouts used in a TNTatlas
is a particularly effective way to add unique tools related to the specific
contents of the atlas and have them automatically installed for use in any TNTatlas
software. This is also illustrated
in the attached color plate entitled Use Layouts to Customize
TNTatlas/X.
A good example of a
data-dependent tool is an SML Tool
Script that makes use of the attributes of a particular vector layer in the
group. The Tool Script can prompt
the user for specific attribute values (for example, street name or other parts
of an address) and then execute an SML
selection query for the corresponding element type.
The script might combine and test data from a number of attributes and
return a variety of results. While
this Tool Script can be very useful, it is data-dependent and, thus, works best
if associated with a group or layout that loads the selected layer(s) for which
it was designed. This type of
data-dependent Tool Script acting on a selected element's attributes is used for
the examples in the attached color plate entitled Modifying SML Tool Scripts
for New Applications.
Modifying
Scripts.
A new, larger SML
sample script collection is now provided as part of PV6.9
and updated via the weekly patches. These
provide some of the examples of how individual functions are used.
A function example may be a whole script or script subsection you can
insert and easily modify. However,
it is also important to become familiar with and search all of the sample script
categories to determine if one exists that can simply be modified to suit your
application. Minor modifications
that can even be undertaken by non-programmers can often change a sample script
into a useful tool, especially those that are data dependent.
The simple alterations to change a Tool Script to work with your specific
geodata are illustrated in the attached color plate entitled Modifying SML
Tool Scripts for New Applications (2 sided).
Debugging
Scripts.
Operations.
The debug procedure
has been expanded to assist you in perfecting your SML
scripts. If you have a script
loaded in the SML Edit window and
select Debug on the SML File menu,
the same script is loaded into the additional SML
Debugger window. The icon bar for
this window permits you to Run the script, Step through, Pause, Stop, Show
Pseudo Code, and Show Timing. Each
time you choose the Step icon the next line in the script is executed as
indicated by an arrow to the left of the line.
This Debug window and the following new features are illustrated and
discussed on the attached color plate entitled SML Debugger and Script Timer.
Breakpoints.
If your script is
long you might want to insert breakpoints by clicking to the left of the script
line. A red ball symbol is inserted to mark the breakpoint (you can remove the
breakpoint later by clicking on this symbol).
When you run the script, it stops at the first breakpoint.
You can then use the Step icon to try the suspect lines following the
breakpoint, or press Run again to have it run to the next breakpoint.
If your script includes other SML
scripts a square symbol is shown in the left margin next to each include
statement. Selecting this icon
expands the script (or pseudo-code) listing in the SML
Debugger window to show the included script code and permit you to step through
it.
Timing.
When you press the
Time icon, a Time column is inserted in the SML
Debugger window to the left of the script listing to show the time (in seconds)
required to execute each script line. If
you run the entire script, times are shown for each line.
If you step through the script, the time to execute each line is added in
sequence as you progress through your script.
In many cases carefully studying these times can lead you to areas where
you can improve the overall script performance by changing the logic in your
script (for example, to identify excessive and unnecessary nested loops,
inefficient computations, and so on).
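As a simple illustration of the kind of hotspot the Time column exposes, consider the short fragment below. It is only a minimal sketch, assuming SML's numeric variables, the print() console function, and the new C-style "for" syntax noted under Miscellaneous in the SML Additions section below. The line computing base does not depend on the inner loop variable, so the timer will charge most of the run time to it; moving that line above the inner loop gives the same result in far less time.

    numeric i, j, base, total;
    total = 0;
    for (i = 1; i <= 1000; i++) {
        for (j = 1; j <= 1000; j++) {
            base = i * i;              # independent of j, yet recomputed 1000 times per i
            total = total + base + j;
        }
    }
    print(total);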
Creating
Dialogs and Windows.
Template
for XML.
V6.8
introduced the use of XML as an alternate means of creating custom dialogs for
user interaction in your scripts. These
user interface components were introduced in detail in that Release MEMO (www.microimages.com/relnotes/v68/rel68.htm)
and are now the subject of your new Building Dialogs in SML tutorial
booklet. Some of you have expressed
an interest in using Xerlin, a free open-source XML editor (www.xerlin.org), to
compose your SML dialogs.
This in turn led a client to request that a widely used XML document
model be provided with the TNT
products to use in Xerlin and other editors.
The sample data directory (SMLDLG) accompanying this tutorial booklet now
includes a Document Type Definition (DTD) file (smlforms.dtd) that provides an SML
template for use in Xerlin and other editors.
The DTD file enables an XML editor to show the attributes available for
each interface component so you can set up the desired characteristics.
Using this template permits you to use all the many features of your
preferred XML editor while targeting your result toward creating the interface
for your SML script.
The use of this DTD template is discussed and illustrated in the attached
color plate entitled Validating SML Dialogs Created in XML.
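For orientation, a dialog specification that a validating editor can check against this DTD is an ordinary XML file that names smlforms.dtd in its document type declaration, roughly as sketched below. The element and attribute names shown here (dialog, label, pushbutton) are placeholders for illustration only; the real names are defined by smlforms.dtd and covered in the Building Dialogs in SML booklet.

    <?xml version="1.0"?>
    <!DOCTYPE dialog SYSTEM "smlforms.dtd">
    <!-- hypothetical element and attribute names; substitute those defined in smlforms.dtd -->
    <dialog id="sample" title="Sample Dialog">
      <label>Enter a value:</label>
      <pushbutton id="ok" name="OK"/>
    </dialog>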
Visual
Basic.
You can now also
use a separate Visual Basic program to create all the same dialog boxes and
interface features permitted by XML directly in your SML
script. This may provide you with several benefits.
1) You may already be familiar with Visual Basic.
2) You can use the interactive layout tools provided by Visual Basic
(Form Layout window) or some other software to design your interface.
Since the VB program can run independently from your SML
script, it can be used as a means of providing input and output from the SML
script, thus controlling some activities in the main TNT
product such as updating a table, redrawing, and so on.
However, this same interface program can be used to gain access to,
display, or edit data from some other source.
In the simplest use of this type, it might retrieve and display information from some other RDBMS to assist the user in responding to the form or dialog that the SML script uses to communicate with the TNT product.
The use of Visual
Basic (or C++ or Java) to create SML
interface elements has only recently been developed.
As a result, it is not yet included in the Building Dialogs in SML
booklet. However, the next revision
of this booklet will cover these topics. In
the meantime, the attached color plate entitled Build SML Dialogs Using
Visual Basic illustrates and discusses the identical dialog created in a
Visual Basic program and within SML
using XML.
SML
Accepts Callbacks from Other Programs (via ActiveX).
Introduction.
You can now write a
program in Visual Basic, Java, C++, and so on that runs concurrently with and
communicates with an SML script
using a callback procedure via ActiveX. This
external program might be used for a wide variety of purposes.
A simple example would be to use a form presented by a Visual Basic
program to collect or edit data for a record from its user.
When the user selects OK to log this record, the VB program can continue
to run for new input or other communications with this user.
Meanwhile, the SML script has
been notified to update the Tabular or Single Record View in TNT.
This approach can be used to resolve the issue described in the section
above (Refresh Tabular Views of External RDBMS) that ODBC does not
automatically notify its clients (that is, TNT via the ODBC link) that a table has been changed.
In this case, the Visual Basic program notifies the SML
script that it has changed a record in that RDBMS and SML
redraws the Tabular or Single Record View.
This table refresh
application is an easily understood example of what might be accomplished with
this callback approach. A simple
extension of this approach would be to have the SML
script redraw a pin mapped layer in the current TNT
View window to update it for the new or edited record created by the Visual
Basic program. This expanded Visual Basic program could be used to track
something that is moving and the SML
script could add pins for each new position (often called "dropping breadcrumbs or string"). Those
writing SML scripts should also
realize that a total redraw of the view is not needed in SML
to add or move a single pin!
If you are interested in using this new callback feature in your SML scripts, keep in mind, as noted above, that you are not restricted to using Visual Basic programs. The
examples showing modification of records in a table via a Visual Basic dialog
are simply easily understood applications. You may be experienced in using a
variety of other higher level programming languages.
This extension of SML will
permit you to implement complex specialized operations in your non-TNT,
non-SML programs that can now take
advantage of the supporting features provided by SML
and the visualization capabilities of a concurrently-running TNT
product (for example, TNTview).
Simple Example
Use.
Additional
discussion of this concept can be found in the attached color plate entitled ActiveX
Callbacks to SML. The Visual
Basic program in this example creates a form populated from a TNT
database record (for example, the ownership of a parcel).
The record is supplied to the form by an SML
Tool Script when the user selects a polygon in the TNT
view. This form can then be used to
edit the ownership data for the selected parcel.
You can download the materials to run this example (VBDEMO2.zip) from www.microimages.com/downloads/tool&macro.htm.
The Visual Basic dialog in this example is a modified version of the
database form used in a previous example discussed and illustrated in the plate
entitled Communicate with Visual Basic Programs using SML (available at
www.microimages.com/documentation/TechGuides/68visbasic.pdf).
The VBDEMO2
directory contains two sample SML
Tool Scripts and the Visual Basic materials.
To install (that is, register) the sample Visual Basic component
program, double-click on the file MicroImages_SML_OLE_Demo_EXE.exe.
The new sample SML script
ParcelToolModeless illustrates the use of callbacks between concurrently-running
SML and Visual Basic programs.
This script imports the Visual Basic class called VBform that defines the
database form and associated ActiveX events that are triggered by pressing the
Apply or Close buttons on the form. The
VBform class also has a class member for each of these buttons (SetOnApply and
SetOnClose) that are used in the SML
script to register the name of the SML
function (defined by the script writer elsewhere in the SML
script) to be activated when that button is pressed.
It is these callback functions in the SML
script that define what actions are to be taken by the SML
script when the respective control in the Visual Basic form is activated.
In this example, editing the database information shown for the selected
polygon on the Visual Basic form and pressing the form's Apply button updates
the polygon database, notifies TNT
that the update has occurred, and turns off polygon highlighting during the
ensuing redraw of the TNT view.
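Stripped of the database details, the registration pattern in ParcelToolModeless looks roughly like the sketch below. This is only a schematic: the VBform class and its SetOnApply and SetOnClose members are those described above, but the declaration, procedure syntax, and bodies shown here are simplified placeholders, so take the working code from the script in VBDEMO2.zip.

    # Schematic sketch only -- see ParcelToolModeless in VBDEMO2.zip for working syntax.
    class VBform form;                # the registered Visual Basic ActiveX component

    proc onApply () {                 # run when the user presses Apply on the VB form
        # write the edited values to the table, notify TNT, and trigger the redraw
    }

    proc onClose () {                 # run when the user presses Close on the VB form
        # release the form and restore the default tool behavior
    }

    form.SetOnApply("onApply");       # register the SML function names with the form
    form.SetOnClose("onClose");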
Additional
Examples.
Additional examples
of using Visual Basic to communicate with SML
and the TNT products have been
developed but are not yet illustrated or documented.
These include several modified versions of the parcel polygon application
described above. In one version the
parcel information is in an Access database linked to the TNT
parcel layer via ODBC. The Visual
Basic form in this example gets the parcel ID for the selected polygon from the SML
Tool Script and uses it to retrieve the ownership data from the Access database
via ODBC. The user can then use the
form to modify the ownership record for that parcel, and these changes are also
communicated back to the view via the SML
Tool Script as in the previous example. Another
variation of this Visual Basic example uses the direct link, not ODBC, between
Microsoft's Visual Basic and SQL Server to obtain and alter the ownership
records in SQL Server tables. Another
version of the Parcel Tool Script has been modified to zoom in on the selected
parcel polygon. If you are
interested in these additional sample procedures please contact MicroImages
software support staff.
Sample Scripts.
Batch
Imports.
Global image and
elevation coverage, high resolution images of a large area, and map features
released in map series units are some of the examples of geodata that are
available to you in hundreds or even thousands of small pieces.
Examples of this would be sets of orthophotos of a city, SRTM or USGS
DEMs, USGS DLGs, Vector Map Level 1 (VMap1), and other feature sets.
These materials are parceled out in bite-sized pieces so that they can be
sold in these small units, delivered on CDs or DVDs, conveniently downloaded via
the Internet, and/or used in free but limited software.
They are also delivered in small units to facilitate their use in
commercial software products that cannot handle large integrated geodata sets.
As a professional geospatial analyst, you will encounter projects that require assembling these data sets in TNTmips for use as a base covering a larger area, or in order to modify, improve, or add to them.
Often the most efficient approach would be to import them into many
objects in a Project File and then mosaic or merge them into a single object
prior to any further activity.
Some formats can be
linked to and then directly mosaicked into a single raster object. However, if
the data tiles are in a new and unusual format, you can automate their
repetitive import using an SML
script. You may also want to
automate additional processing (changing their cell size, map projection,
georeferencing, ...) in this same script if it is more conveniently accomplished
on these small units as they are imported.
Using an SML script is a good
way to automate this kind of import, especially if the source geodata are
periodically changing, in a unique format, require a change in data type (for
example, 8-bit to 16-bit integer), and/or are in many tiles, each with a different data range.
The preliminary
release via the Internet of the Space Shuttle SRTM elevation data in 1201 by
1201-cell tiles is a good example of this kind of SML
application. These tiles are being released by continent.
South America
has 1813 tiles and these can be
downloaded easily via the Internet (ftp://edcsgs9.cr.usgs.gov/pub/data/srtm/).
However, these tiles are subject to periodic changes as some large
no-data areas and details in mountainous areas are filled in from other data
sources, and smaller flaws are corrected by further analysis of the source SRTM data
or by filtering (smoothing) the tiles. This
is the kind of situation where writing or modifying a standalone SML
script is the best way to repetitively import these raster objects by continent.
The attached color plate entitled Automate Batch Imports Using SML
describes an SML script created as a
model for your batch import scripts. The
script is explained on the reverse side of this plate.
This short script can be downloaded from the MicroImages' SML
Sample Scripts collection at www.microimages.com/downloads/scripts.htm.
Use it as a model to create your own batch import script for any raster,
vector, or other geodata import supported by the TNT
products. It is also a good shell
or model for any script you must create to import some unique geodata format you
encounter or a format not handled by the TNT
products.
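As a rough sketch of the looping pattern such a script follows (this is not the downloadable model script itself), the fragment below simply builds a name for each tile and marks where the import and per-tile processing calls belong. Everything in it other than print() is a placeholder under assumed SML string handling; take the actual import, cell size, and georeference calls from the downloadable script.

    # Pattern sketch only; the working import calls are in the downloadable
    # Automate Batch Imports Using SML script.
    numeric i;
    string tilename$;
    for (i = 1; i <= 1813; i++) {                    # e.g., one pass per South America SRTM tile
        tilename$ = "tile_" + NumToStr(i) + ".hgt";  # NumToStr() is a hypothetical helper
        print(tilename$);
        # import tilename$ into the Project File here, then change cell size,
        # map projection, or data type while the tile is still small
    }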
Large Complex
Scripts.
For Process
Applications.
Not all SML
scripts are compact and simple. Some
large scripts have been created to handle "production line" approaches, where a long, repetitive, proprietary sequence of steps is automated into a process requiring little or no human intervention.
An example of this is one satellite image acquisition company's implementation of a large SML script that starts with access to their original image archive, ingests their customer's local area of interest, adjusts local georeference if needed (this step requires human intervention), calibrates to reflectance, processes into very specific application products, applies contrast and color calibration, and exports to TIFF and other formats as well as bundling the results with a linked TNTatlas.
The final product
produced in this application is time sensitive, and prompt delivery is critical.
This SML production line
approach, including any human input, is completed and the product available for
Internet access or shipment on a CD 2 hours after the newly acquired satellite
image is processed and made available in the company's image archive.
It has been in use for more than a year and is repeated multiple times a
day using more than 1 TNTmips
system. It is continually being refined by its authors to improve their products and to take advantage of new SML capabilities as they are added.
For
Experimental or Scientific Applications.
The largest public
script made available to MicroImages has been written by Mr. Ralf Koller, a
graduate student in geography, for his master's thesis at Friedrich Alexander Universität, Erlangen-Nürnberg, Germany.
He has his own TNTlite
installed at home on his Mac OS X computer and also uses the professional TNTmips
Windows version available via this university's Special Academic License. Ralf
was very effective at communicating and interacting with MicroImages' support
software engineers while creating this script and gets an "A" grade from
MicroImages for this. His SML
script has about 83,000 lines and is designed to calibrate and normalize a
collection of four multitemporal Landsat multispectral images.
Any one of the four scenes can be in one of several possible formats, and
accommodating these processing options contributed to the length and complexity
of the script. Despite this
complexity, the script runs equally well in Windows, Mac OS X, and Linux since
all use the same SML.
Ralf reports that he has processed a 1309 by 2081-cell extract of four
scenes in about 2 hours and 15 minutes on an older Pentium III computer (866
MHz, 156 MB of RAM and 4 GB of virtual memory).
This script is introduced and illustrated in the attached color plate
entitled Calibrate Multitemporal Landsat Scenes via SML.
The complete script can be downloaded from the MicroImages' SML
Sample Scripts collection at www.microimages.com/downloads/scripts.htm.
The script can be run in TNTlite
provided that none of the input image bands exceed the raster dimension limits
imposed in TNTlite.
SML Additions.
New
Functions.
Database functions (1).
TableTriggerRecordChangedCallback( ): allows an SML script to notify an RVC database that a linked table was modified by an external source.
Raster functions (3).
RasterCopy( ): copies a raster or a subarea and all its subobjects (georeference, contrast, colormaps, etc.).
RasterFloodFill( ): flood fills an area of a raster.
RasterGetSolidAreaSize( ): computes the size (in pixels) of a raster area that would be filled by a flood fill.
Geodata Display functions (1).
GroupQuickAddRGBRastersVar( ): adds a GRE_RASTER_LAYER to a GRE_GROUP given 3 raster variables (one for each RGB component).
Drawing functions (1).
LineStyleSetRect( ): CartoScript function to draw a rectangle.
Modified
Functions.
CreateHistogram( ) now has a parameter to set a sampling interval.
RenderToRaster( ) now renders to a 4-bit or 8-bit palette color raster for use in color separation.
Navigate to and select a table (see POPUP functions).
Refresh a database table view (see Database functions).
GroupQuickAddRGBRasters( ) has been modified to create an RGB raster layer in a display group from 3 separate rasters.
New Classes.
BITSET: an array of bits that can be resized.
BITSET_ITERATOR: an iterator to step forward through all selected items in a BITSET.
BITSET_UNOWNED: an array of bits.
CONTRAST: a contrast object for raster display.
GRE_LAYER_SCRIPT: a script layer.
GUI_CANVAS: a canvas control to support drawing.
LABELFRAMEPARMS: a frame for displayed labels.
TNTSIM3D: the interface class for TNTsim3D.
Modified
Classes (new methods added).
DBFIELDINFO can now get the field type, its width, and the number of decimal places.
VECTORLAYER can now get the currently selected (highlighted) elements.
MieUSERDEFINEDRASTER now permits setting lines, columns, data type, and so on.
MieTIFF can now export 16-bit grayscale rasters to TIFF.
Generate a contrast subobject (see the CONTRAST class).
Miscellaneous.
The documentation
has been altered to help you locate all of the geospatial rendering engine
classes for layers, groups, and views that now start with GRE_.
For this purpose old classes were renamed with the prefix GRE_ so that
the old and new classes will be presented together when the class list is sorted
alphabetically. However, existing scripts that use the old class names will continue to work.
C-style syntax can be used in "for" loops.
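For example, a counting loop can now be written in the familiar C form; the short fragment below is a minimal sketch assuming only SML's numeric variables and the print() function.

    numeric i, total;
    total = 0;
    for (i = 1; i <= 10; i++) {    # new C-style "for" syntax
        total = total + i;
    }
    print(total);                  # prints 55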
Upgrading
TNTmips.
If you did not
purchase RV6.9 of TNTmips
in advance and wish to do so now, please contact MicroImages by FAX, phone, or
email to arrange to purchase this version. When you have completed your purchase
you will be provided with an authorization code by FAX.
Entering this authorization code while running the installation process
lets you complete the installation of TNTmips
RV6.9.
The prices for
upgrading from earlier versions of TNTmips
are outlined below. Please
remember that new features have been added to TNTmips
with each new release. Thus, the
older your version of TNTmips
relative to RV6.9, the higher your
upgrade cost will be.
Within the NAFTA
point-of-use area (Canada, U.S., and
Mexico) and with shipping by UPS ground.
(+150/each means US$150 for each additional upgrade increment.)
Price to upgrade from TNTmips version:
| TNTmips Product | V6.80 | V6.70 | V6.60 | V6.50 | V6.40 | V6.30 and earlier |
| Windows/Mac/Linux | US$500 | 750 | 950 | 1100 | 1250 | +150/each |
| Windows/Mac/Linux, for 1-user floating | US$600 | 900 | 1140 | 1320 | 1500 | +180/each |
| UNIX, for 1-fixed license | US$800 | 1250 | 1650 | 2000 | 2250 | +200/each |
| UNIX, for 1-user floating | US$960 | 1500 | 1980 | 2400 | 2700 | +240/each |
For a point-of-use
in all other nations with shipping by air express.
(+150/each means US$150 for each additional upgrade increment.)
Price to upgrade from TNTmips version:
| TNTmips Product | V6.80 | V6.70 | V6.60 | V6.50 | V6.40 | V6.30 and earlier |
| Windows/Mac/Linux | US$600 | 900 | 1150 | 1400 | 1600 | +150/each |
| Windows/Mac/Linux, for 1-user floating | US$720 | 1080 | 1380 | 1680 | 1920 | +180/each |
| UNIX, for 1-fixed license | US$900 | 1400 | 1850 | 2200 | 2500 | +200/each |
| UNIX, for 1-user floating | US$1080 | 1680 | 2220 | 2640 | 3000 | +240/each |
Maintaining
Translations.
The 5 language
resource files (interface, messages, help, and so on) have been merged into 1
reference file for easier management.
A new multilingual
editor is provided to official translators for maintaining this reference file.
It is illustrated and discussed in the attached color plate entitled New
Text Localization Utility.
Available
Languages.
The current
chart of language interface packages for the TNT
products is at www.microimages.com/i18n/locales/.
These packages for RV6.9 are just now becoming available because the changes MicroImages made to streamline the maintenance of these translations temporarily created impediments for our official translators.
Pending
Languages.
Afrikaans.
An agreement has been signed with a TNTmips
user in South Africa to provide a TNT
language resource file for the use of the TNT
products in Afrikaans.
Norwegian.
Discussion is underway for a possible translation of the TNT
language resource file into Norwegian.
Note!
If your language is missing for V6.8 or earlier and you wish to discuss becoming its official technical translator, please contact MicroImages.
The following 6 new
Resellers were authorized to sell MicroImages' products since RV6.8 shipped.
SLOVAKIA (Spisska Nova Ves).
Koral s.r.o.
Slavomir Daniel
Sladkovicova 5
Spisska Nova Ves 05201, Slovakia
voice: (4219)6544-11834
FAX: (4219)6544-11834
email: daniel@koral.sk

SERBIA and MONTENEGRO (Belgrade).
PrimaRes d.o.o.
Jasmin Babic
Njegoseva at Maksima Gorkog
Belgrade 11000, Serbia and Montenegro
voice: (3811)1444-4302
FAX: (3811)1444-4302
email: office@primares.co.yu
web: www.primares.co.y

BOSNIA and HERZEGOVINA (Banja Luka).
Geo-centar d.o.o. - BiH
Vladimir Petrovic
Veselina Maslese 1
Banja Luka 78000, Bosnia and Herzegovina
voice: (3875)121-1580
FAX: (3875)121-1580
email: geo-centar@spinter.net
web: www.geocentar-bih.com

CANADA (Richmond).
GlobalPoint Technologies
Steven Ge
South Tower, Suite 305
5811 Cooney Road
Richmond, BC V6X 3M1, Canada
voice: (604)207-7770
FAX: (604)207-9552
email: s_ge@sbcglobal.net

EGYPT (Cairo).
iTarget
Sherif Khattab
179 El Nozha Street
Heliopolis, Cairo, Egypt
voice: (202)644-5666
FAX: (202)635-7288
email: d_khalil@hotmail.com

UGANDA (Kampala).
MGGS Ltd. (Muienr Geo-Graphics Service)
Samuel Mugisha
P.O. Box 6072
Kampala, Uganda
voice: (2564)153-0135
FAX: (2564)153-0134
email: smugisha@muienr.mak.ac.ug
The following
resellers are no longer authorized to sell MicroImages' products.
Please do not contact them regarding support, service, or information.
Please contact MicroImages directly or one of the other MicroImages
Authorized Resellers.
Argentina. PROCON [Ruben Actis Danna], located in Cordoba, is discontinued.
Brazil. Geosat Ltda. [Oscar Hoogenboom], located in Sao Paulo, is discontinued.
India. Physical Planning Consultants [Dilip Kumar Paul], located in Calcutta, is discontinued. MicroNet Solutions [Dheerja Mehra], located in Nagpur, is discontinued. Landends Solutions [Praveen Ummadi], located in Hyderabad, is discontinued.
Slovakia. GeoComplex [Jaroslav Gretsch], located in Bratislava, is discontinued.
For
simplicity, the following abbreviations were used in this MEMO:
RV6.9 = the
official and first release of V6.9
of the TNT products matching the
version on the CDs distributed.
PV6.9 = any version
of the TNT products created
subsequent to RV6.9 to which patches
have been applied to update RV6.9 or
a PV6.9.
DV7.0 = the partially complete development version of the TNT products, which will eventually be officially released as V7.0 when complete.
W95
= Microsoft Windows 95.
W98
= Microsoft Windows 98.
WME
= Windows Millennium Edition.
NT
or NT4 = Microsoft NT 4.0 (the TNT
products require the use of NT4.0 and its subsequent Service Packs).
NT4 now has a Service Pack 6a available.
Windows 2000 now has Service Pack 4, which is recommended if you are
working with large files.
W2000
= Microsoft Windows 2000.
XP
= Microsoft Windows XP.
Mac
10.3.2 = Apple Macintosh using Mac OS
X version 10.3.2.
MI/X
= MicroImages' X Server for
Windows platforms and operating systems.
GRE
= MicroImages' Geospatial Rendering Engine, which is at the heart of most MicroImages products. The current GRE will respond to and render requests from either X/Motif or Windows.
rpm = revolutions per minute.
KB = kilobyte (1,000 bytes or 1,024 [2^10] bytes)
MB = megabyte (1,000,000 bytes or 1,048,576 [2^20] bytes)
GB = gigabyte (1,000 megabytes or 1,073,741,824 [2^30] bytes)
TB = terabyte (1,000 gigabytes or 1,099,511,627,776 [2^40] bytes)