
How can I generate a high resolution rendering of the globe?

I am building an ANSI E poster size map collection and would like to add a nice rendering of the globe to the maps indicating the approximate position of the region on the globe.

The problem is that I need the globe rendering to be a high-quality, high-resolution image (minimum 300 or 400 DPI at about 12" x 12") so that I can work with it in CorelDRAW, where I will be creating the final product.

I have Google Earth (Free), NASA World Wind and ArcGlobe, but none of them is capable of producing a satisfactory result. They all look OK on screen, but the image resolution is not press quality.

So far Google Earth seems best (although terrain relief would be better than the spotty imagery), followed by NASA World Wind, with ArcGlobe lagging behind. (Given the price tag, one would expect ArcGlobe to deliver, but I am fully aware of the ArcScene exporter problems, so I am not very surprised.)

If someone could recommend a free virtual globe program that I can use to do this, that would be great. Other options are welcome as well; the globe does not have to be dynamic, but it has to focus on Canada, centered on Ontario.

Here are the outputs from the above-mentioned programs:

[Screenshots: Google Earth, NASA World Wind, ESRI ArcGlobe 10]


Render your own using POV-Ray, a ray-tracing engine that can produce images of any arbitrary size. Here's an example earth centered on Southern California: http://geohack.net/earth/sb-2048.png. There is an excellent guide that covers the approach in detail.
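If you want to script the render, here is a minimal sketch of the idea: write a small POV-Ray scene that wraps an equirectangular earth texture (for example, a NASA Blue Marble image saved as earth.jpg) around a sphere, then render it at whatever pixel size your print requires (3600 × 3600 px gives 12" × 12" at 300 DPI). The file names, rotation values and POV-Ray options are illustrative and may need adjusting for your texture and the exact view you want.

```python
# Sketch: generate a POV-Ray scene for a textured globe and render it at print
# resolution. Assumes an equirectangular texture "earth.jpg" is in the working
# directory and that the povray binary is on the PATH.
import subprocess

LAT, LON = 50.0, -85.0   # roughly centre the view on Ontario, Canada
SIZE_PX = 3600           # 12 in x 12 in at 300 DPI

scene = f"""
camera {{ location <0, 0, -4> look_at <0, 0, 0> right x up y angle 35 }}
light_source {{ <-5, 5, -10> color rgb 1 }}
sphere {{
  <0, 0, 0>, 1
  texture {{
    pigment {{ image_map {{ jpeg "earth.jpg" map_type 1 interpolate 2 }} }}
    finish {{ ambient 0.3 diffuse 0.7 }}
  }}
  // spin the globe so the chosen lat/lon faces the camera
  // (signs may need flipping depending on the texture's orientation)
  rotate <0, {-LON}, 0>
  rotate <{LAT}, 0, 0>
}}
"""

with open("globe.pov", "w") as f:
    f.write(scene)

subprocess.run(
    ["povray", "+Iglobe.pov", "+Oglobe.png", "+FN",
     f"+W{SIZE_PX}", f"+H{SIZE_PX}", "+A"],
    check=True,
)
```

Because POV-Ray renders vectors and textures at whatever pixel size you ask for, the only real limit on print resolution is the resolution of the source texture you wrap around the sphere.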


The highest-resolution, seamless and cloudless imagery you're going to find is from NASA's Visible Earth.


I'm not sure if that's what you are looking for, but have a look at:

  • Google's WebGL Globe
  • WebGL Earth
  • DynViz

All via Information Aesthetics Weblog


Would you be able to do it using Natural Earth data? I'm not sure if it would work, but you could shade terrain based on height, and bathymetry based on depth.


Does bumping DPI improve the print quality?

I am about to publish a book, and after uploading the contents I was notified of various errors, especially about images that were less than 300 DPI (115 dpi, 150 dpi, 200 dpi). I was told that they can accept only 300 DPI pictures, as that is best for printing.

Now, my images weren't really of that quality to begin with, as they are simply scans of documents, but I tried to comply and used the Image Size feature in Photoshop to increase the DPI to 300. That made the images far too large (around 40 MB per image). Am I supposed to resize the dimensions after changing the DPI?

My question is: if I change the DPI to 300 and leave everything else the same, the file size is going to be huge. What is the recommended image dimension for printing on A4 paper, and will increasing the DPI actually affect the print quality? I print the same content on my home printer and it comes out absolutely fine.
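For what it's worth, the arithmetic that ties DPI, pixel dimensions and print size together is simple; here is a rough sketch of it (A4 dimensions assumed, everything else illustrative):

```python
# Rough sketch of the DPI arithmetic: DPI just ties pixel dimensions to a
# physical print size, so "changing DPI" without resampling does not add detail.
A4_WIDTH_IN, A4_HEIGHT_IN = 8.27, 11.69   # A4 paper in inches
TARGET_DPI = 300

def pixels_needed(width_in: float, height_in: float, dpi: int) -> tuple[int, int]:
    """Pixel dimensions required to print at the given size and DPI."""
    return round(width_in * dpi), round(height_in * dpi)

def effective_dpi(width_px: int, print_width_in: float) -> float:
    """DPI actually achieved when an existing image is printed at a given width."""
    return width_px / print_width_in

print(pixels_needed(A4_WIDTH_IN, A4_HEIGHT_IN, TARGET_DPI))  # -> (2481, 3507)
print(effective_dpi(1700, A4_WIDTH_IN))                      # a 1700 px wide scan -> ~206 DPI
```

In other words, setting 300 DPI without resampling only changes a metadata tag (same pixels, smaller stated print size and no change in file size), while resampling up to 300 DPI inflates the file without adding real detail; the printable detail is fixed by the resolution of the original scan.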


3D Web-Mapping

While Google Earth is the best known 3D web-mapping viewer currently in the public domain, there are alternative virtual globe viewers available. In particular, this article focuses on the NASA World Wind viewer, using OGC data-dissemination standards and multi-beam sonar data from the Irish National Seabed Survey to illustrate its potential.

The creation of a range of innovative GIS tools and systems is enabling the harmonisation and electronic sharing of geospatial data and services across distributed networks. 3D web mapping viewers, such as NASA World Wind, are perhaps some of the most visible of these products.

OGC Standards
Such developments are putting in place some of the key building blocks that underpin the development of Spatial Data Infrastructures (SDIs). The Open Geospatial Consortium (OGC) has published a number of interoperability standards ('OpenGIS' specifications) which are major components of SDI. Important OGC standards include Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). The WMS standard facilitates web-based dissemination of map imagery, while the WFS and WCS standards facilitate the dissemination of feature and coverage/raster data respectively.
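As a purely illustrative example of what a WMS request looks like, the sketch below assembles a standard version 1.1.1 GetMap URL; the endpoint and layer name are placeholders rather than a real service.

```python
# Illustrative sketch of a standard WMS 1.1.1 GetMap request.
# The server URL and layer name below are placeholders, not a real service.
from urllib.parse import urlencode

base_url = "https://example.org/wms"          # hypothetical WMS endpoint
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "dublin_bay_bathymetry",        # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "-6.3,53.2,-5.9,53.5",            # lon/lat bounding box
    "WIDTH": "1024",
    "HEIGHT": "768",
    "FORMAT": "image/png",
}
print(f"{base_url}?{urlencode(params)}")
```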

MarineGrid Research
In the September 2006 edition of Hydro International (volume 10, issue 7) we demonstrated the use of the WMS standard as part of the MarineGrid research project. This project is funded by the Irish Higher Education Authority and involves researchers from the National University of Ireland, Galway (NUIG) and the Coastal and Marine Resources Centre at University College Cork (UCC). An in-house WMS server was developed to disseminate shaded relief bathymetric imagery of Dublin Bay from the Irish National Seabed Survey. The article illustrated how it is possible to integrate large imagery datasets into Google Earth using WMS and KML (Keyhole Markup Language). However, it is possible to integrate WMS imagery into other 3D viewers too.

NASA World Wind
NASA World Wind is an interactive, web-enabled 3D-globe viewer. It was first released by NASA's Learning Technologies project in August 2004 and is similar to the Google Earth viewer, where you can zoom in to any place on Earth. However, the viewers are aimed at different audiences. World Wind was originally designed as a scientific educational tool, while Google Earth was designed as a geographic location-based search tool. Both viewers have their own unique capabilities. Google Earth principally uses commercial high-resolution satellite/aerial imagery, while World Wind uses public-domain satellite/aerial imagery. World Wind also supports 3D-globe visualisation of other planets such as Venus and Mars, and the Moon. World Wind is open-source, allowing developers to modify the source code or develop plug-in or add-on tools. For example, free plug-ins have been developed that can access imagery and maps from the French Géoportail and Microsoft's Virtual Earth, although in these cases data licensing can be more restricted.

WMS Streaming
Similar to Google Earth, World Wind downloads small portions of imagery corresponding to a user's virtual view, using a hierarchical tiling technique. For any given area, regional low-resolution tiles are first downloaded, followed by medium-resolution tiles and finally local high-resolution tiles. Hierarchical tiling enables the visualisation of multi-terabyte datasets. Once downloaded, the data is cached on local disk. Due to the open and public nature of World Wind this cache is not encrypted. World Wind directly supports WMS delivery of data. It is therefore possible to connect the viewer with any WMS-compliant service. This is accomplished by editing a single configuration file (the @Images.xml file). This was done for the MarineGrid project, where the reconfigured World Wind viewer can directly access shaded relief imagery from the Dublin Bay WMS server.
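The tiling idea can be illustrated with a small sketch that maps a latitude/longitude to a tile row/column at successive levels of detail; this is a generic illustration only and does not reproduce World Wind's actual tile scheme or numbering.

```python
import math

# Generic illustration of hierarchical tiling: which tile contains a given
# lat/lon at a given level. This mirrors the idea described above but does NOT
# reproduce World Wind's actual tile scheme or numbering.
def tile_indices(lat: float, lon: float, level: int, level0_tile_deg: float = 36.0):
    """Return (row, col) of the tile covering lat/lon at the given level."""
    tile_deg = level0_tile_deg / (2 ** level)   # tiles shrink by half each level
    col = int(math.floor((lon + 180.0) / tile_deg))
    row = int(math.floor((lat + 90.0) / tile_deg))
    return row, col

# The same point falls into progressively smaller (higher-resolution) tiles.
for level in range(4):
    print(level, tile_indices(53.35, -6.1, level))
```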

Elevation Streaming
World Wind supports 3D visualisation of both land and ocean. It is therefore possible to explore the Earth’s mountains or the oceans’ continental slopes in 3D. This terrain data is streamed from a NASA server as required, also using a hierarchical tiling technique. The data is sourced from the SRTM+ dataset, a relatively low-resolution dataset. While the WMS standard supports map-imagery streaming, the WCS standard can support elevation-data streaming. However, World Wind does not use this standard. Instead, terrain tiles are pre-built, placed on a NASA server and accessed through a non-WCS compliant web service. As World Wind is open-source, the API to this terrain web service is known.

High-Resolution
A useful World Wind feature is the ability to stream higher spatial-resolution elevation models with up to one-metre vertical resolution using the terrain API. A 'posting' tool for generating terrain tiles is available online. As part of the MarineGrid project, this tool was used to generate high-resolution bathymetry tiles for Dublin Bay. These tiles were placed on an in-house server adhering to the terrain API. After reconfiguration (i.e. editing the Earth.xml file), the viewer automatically streams higher-resolution terrain from the in-house server. Figure 2 illustrates the imported seabed terrain for a region of Dublin Bay. Note the cursor in the centre of the screen that identifies the underlying seabed as thirty metres in depth.

Time-Series WMS
World Wind supports time-series WMS imagery through the 'WMS browser' feature, where time-series imagery is visualised as an animation sequence. However, this feature does not use a hierarchical tiling technique. Therefore, it does not support the visualisation of large time-series imagery datasets. It is possible to connect the 'WMS browser' to a WMS-compliant service storing time-series data by editing the wms_server_list.xml configuration file. The browser automatically builds a list of available data layers from the remote server through a WMS GetCapabilities request. This feature has also been used by MarineGrid to visualise time-series imagery generated from a 4D hydrodynamic model of the Northeast Atlantic. Figure 3 illustrates a sea-temperature animation at one thousand metres depth. Legends, e.g. a temperature scale, are also supported but not illustrated here.
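The layer-discovery step can be sketched as follows; the endpoint is a placeholder, and the snippet simply lists the named layers advertised in a WMS 1.1.1 capabilities document.

```python
# Sketch of the kind of GetCapabilities request a WMS client issues to discover
# available layers. The endpoint below is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

url = "https://example.org/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetCapabilities"
with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

# In a WMS 1.1.1 capabilities document, each published layer advertises a
# <Name> element inside a <Layer> element.
layer_names = [
    layer.findtext("Name")
    for layer in tree.getroot().iter("Layer")
    if layer.findtext("Name")
]
print(layer_names)
```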

Vector Support
Support for importing vector data into World Wind is currently limited. For example, vector objects such as polygons are not rendered directly but instead rasterised and rendered as a texture. Basic ESRI Shapefile and KML are supported, but these features are still under development. Better vector-object rendering is required and planned for future releases. WFS support is also planned.

Other 3D Viewers
ArcGIS Explorer is an upcoming 3D web-mapping viewer developed by ESRI. It is still in beta production but will be available with the forthcoming release of ESRI ArcGIS 9.2. The viewer streams data from ESRI proprietary servers: ArcGIS Server, ArcIMS and ArcWeb services. However, it also supports some open standards: the WMS standard and the de-facto KML standard. Finally, an official 3D version of the French Géoportail is expected next year. The list of web-enabled virtual globe viewers is growing.

Concluding Remarks
Three-dimensional web-mapping viewers using a virtual globe to portray the world have helped to popularise GIS, transforming a flat 2D world into a more stimulating experience for the general public. Along with faster computers, their success has been greatly helped by the broadening availability of broadband, which enables faster data streaming. Viewers like NASA World Wind have helped to highlight and demonstrate the importance of OGC standards in building SDIs. However, there is a need for viewers to fully support these standards. As the existing features of the current wave of 3D web-mapping products are strengthened, it is important to look at their future evolution. Support for metadata is very important: how often does a user ask how old the satellite image of their city is? True 3D support is also required. The ocean is not a surface but a volumetric space. For example, to better visualise and interpret a volumetric hydrodynamic model, slicing and profiling tools are needed. The wish list for new features is probably endless. However, it is the marketplace that will determine the additional functionality that is actually realised.

Acknowledgments
Thanks to the Irish National Seabed Survey team for providing the Dublin Bay multi-beam data. This survey is managed in partnership with the Geological Survey of Ireland (GSI) and the Marine Institute and funded by the Irish Government. Thanks to Ahed Al-Sarraj, Civil Engineering Department, National University of Ireland, Galway for developing and providing the 4D hydrodynamic model data of the Northeast Atlantic. Thanks to ESRI Ireland for producing the ArcGIS Explorer image from their test environment.


Collimation

The concept of collimation is relatively straightforward with single–detector row CT. With the single–detector row technique, collimation refers to the act of controlling beam size with a metallic aperture near the tube, thereby determining the amount of tissue exposed to the x-ray beam as the tube rotates around the patient (1, 2). Thus, in single–detector row CT, there is a direct relationship between collimation and section thickness. Because the term collimation may be used in several different ways in multi–detector row CT, it is important to distinguish between beam collimation and section collimation.

Beam Collimation

Beam collimation is the application of the same concept of collimation from single–detector row CT to multi–detector row CT. A collimator near the x-ray tube is adjusted to determine the size of the beam directed through the patient. Because multiple channels of data are acquired simultaneously, beam collimation is usually larger than reconstructed section thickness (3).

When a 16-channel scanner is used, for example, one of two settings is selected for most applications (Fig 1). Narrow collimation exposes only the central small detector elements. The data acquisition system controls the circuits that transmit data from the detector and collects data only from the intended elements (4, 5). Wider collimation may expose the entire detector array. Unlike narrow collimation, in which the central elements are sampled individually, with wide collimation the 16 central elements are paired or binned, providing data as if they were eight larger elements (6). The four additional larger elements on each end of the detector array then complete the total of 16 channels of data. In this example, beam collimation would be 10 mm in the narrow setting or 20 mm in the wide setting.

Because beam collimation combined with table translocation determines the amount of z-axis coverage per rotation, it also helps determine the length of tissue or “volume coverage” that can be scanned within a given period (3). Larger beam collimation allows greater volume coverage within the time constraints of a given breath-hold or contrast material injection. An important point is that, as with single–detector row CT, narrow collimation in four- and 16-channel multi–detector row CT typically results in higher radiation dose to the patient compared with wide collimation (7, 8).

Section Collimation

The concept of section collimation is more complex but vital to understanding the potential of multi–detector row CT. One of the key components of multi–detector row CT is a detector array that allows partition of the incident x-ray beam into multiple subdivided channels of data (3). Section collimation defines the acquisition according to the small axial sections that can be reconstructed from the data as determined by how the individual detector elements are used to channel data. As opposed to beam collimation, which determines volume coverage, section collimation determines the minimal section thickness that can be reconstructed from a given data acquisition.

Using the earlier example of a 16-channel scanner, let us assume that the small central detector elements are 0.625 mm and the large peripheral elements are 1.25 mm. The size of the elements exposed and the way in which data are sampled from them by the data acquisition system determine the physical properties of the projection data used to generate axial images (4, 6, 8). When narrow collimation is applied (in this example, an incident beam width of 10 mm), the central small detector elements are treated individually by the data acquisition system (Fig 2). This form of acquisition permits reconstruction of axial sections as small as the central detector elements, or a section collimation of 0.625 mm.

When wide beam collimation (20 mm in this example) is used, the central elements are coupled so that two 0.625-mm elements are sampled as a single 1.25-mm element and the peripheral 1.25-mm elements are sampled individually, resulting in a section collimation of 1.25 mm. As a result, axial sections cannot be reconstructed smaller than 1.25 mm. Thus, section collimation is defined by the effective size of the channels of data sampled by the data acquisition system (the individual or coupled detector elements) and determines the minimum section thickness that can be reconstructed in a given acquisition mode. “Effective detector row thickness” is another term that has been used to describe section collimation (8).

If a routine abdominal examination interpreted at 5-mm section thickness reveals a finding and the radiologist or surgeon would like detailed coronal images, the section collimation determines whether the data can be reconstructed to 0.625-mm or 1.25-mm section thickness to provide a new data set for the reformatted images. Although it may be tempting to use the smallest section collimation available routinely, this may increase radiation dose to the patient (particularly with four- to 16-channel scanners) (7, 8). Thus, section collimation is an important consideration in designing protocols with multi–detector row CT, as the anticipated need for isotropic data must be balanced with radiation dose considerations.

Section collimation and the quantity of data channels used during data acquisition are described by the term “detector configuration.” For example, the detector configuration for a 16-channel scanner acquiring 16 channels of data, each 0.625 mm thick, is described as 16 × 0.625 mm. The same scanner could also acquire data by using different detector configurations, including 16 × 1.25 mm and 8 × 2.5 mm. The detector configuration also describes the relationship between section and beam collimation, since beam collimation can be calculated as the product of the section collimation and the number of data channels used (5, 8).
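A small sketch of this arithmetic, using the example values from the text:

```python
# Sketch of the relationship described above: beam collimation is the product
# of the section collimation and the number of data channels used.
def beam_collimation_mm(n_channels: int, section_collimation_mm: float) -> float:
    return n_channels * section_collimation_mm

# 16-channel scanner examples from the text
print(beam_collimation_mm(16, 0.625))  # narrow setting -> 10.0 mm
print(beam_collimation_mm(16, 1.25))   # wide setting   -> 20.0 mm
print(beam_collimation_mm(8, 2.5))     # 8 x 2.5 mm configuration -> 20.0 mm
```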

Although section profiles for thin and thick collimation vary among different vendors, the general principles are applicable to all scanners. Correlation between beam collimation and section collimation on different types of 16-channel scanners is shown in the Table.


3 Results

3.1 Orthophoto Comparison

SfM camera scenarios 1–3 generated visually similar orthophotos, both among themselves and in comparison with the aerially acquired orthophoto (Figures 1a and 1b). While the spectral characteristics (i.e., color scheme) and environmental conditions associated with the collection of the SfM and aerial orthophotos were different, the underlying topographic pattern (e.g., vegetation distribution and stream position) was essentially the same. That is, deviations between the two techniques could be attributed to differences in scale and the underlying elevation information used to geometrically correct the aerial orthophoto. The native resolution of the SfM orthophotos was less than 0.02 m, compared to the 0.30 m of the aerial photo.

3.2 GPS and SfM-MVS Surveys

The median elevation surveyed with the GPS is 2362.37 m (range: 2360.42–2366.09 m) and the MAD is 0.730 m. A 1.00 m resolution DEM derived from the GPS survey (Figure 3a) illustrates the microform of the site. SfM-MVS data were processed in 24 h per camera scenario, using 12 GPUs. The horizontal and vertical root mean squared errors (RMSE) are 2.06 and 0.052 m for camera scenario 1, 0.03 and 0.084 m for camera scenario 2, and 0.03 and 0.082 m for camera scenario 3, respectively. After prefiltering, the point cloud densities of each scenario, although variable, were

3.3 Comparison of SfM-MVS and RTK GPS Elevation Data

The Wilcoxon test indicated that 114 (35%) of the 324 SfM-MVS scenarios generated were not significantly different from the GPS elevation data, meaning they produced similar elevations. None of the scenarios associated with camera scenario 1 (camera position only) or with the aggressive prefilter were among those similar to the GPS data. The camera scenarios using GCPs were statistically different from the non-GCP scenario but were not significantly different from one another. Similar findings occurred with the aggressive prefilter. Additionally, it was found that scenarios using the minimum elevation of a given resolution were less likely to generate data similar to those of the GPS, while the mean and maximum values were equally likely to produce elevation values similar to the GPS data. Other processing steps were equally likely to generate SfM-MVS data that were similar or dissimilar to the GPS data.

For those scenarios similar to the GPS data, point-wise individual differences were calculated to examine the bias-corrected errors between the GPS and SfM-MVS values. A visual inspection of the SfM-MVS errors (i.e., quantile-quantile plots and histograms) indicated that they approximated a normal distribution. However, in an effort not to artificially constrain the error through an assumption of normality, nonparametric measures of variance and comparison techniques were used rather than their parametric counterparts (e.g., the t test and standard deviation). Figure 5 provides an illustration of the observed errors, characterized using MAD for the error bars, as well as the median offset required prior to comparison with the GPS data. The minimum and maximum MAD values observed were 0.09 and 0.13 m, respectively. The minimum and maximum median offsets were 0.23 and 0.54 m, respectively.
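A rough sketch of this error summary, using synthetic elevation arrays rather than the study's data, is given below; it removes the median offset, reports the MAD of the bias-corrected errors, and runs a paired Wilcoxon test.

```python
import numpy as np
from scipy import stats

# Sketch of the nonparametric error summary described above, using made-up
# elevation arrays: remove the median offset, then report the median absolute
# deviation (MAD) of the remaining (bias-corrected) errors.
rng = np.random.default_rng(0)
gps_z = 2362.4 + rng.normal(0.0, 0.5, size=200)           # hypothetical GPS elevations (m)
sfm_z = gps_z + 0.35 + rng.normal(0.0, 0.1, size=200)     # hypothetical SfM-MVS elevations (m)

errors = sfm_z - gps_z
median_offset = np.median(errors)                         # systematic bias to remove
mad = stats.median_abs_deviation(errors - median_offset)  # spread of bias-corrected errors

# Paired Wilcoxon test of whether the offset-corrected SfM elevations differ from GPS
stat, p_value = stats.wilcoxon(sfm_z - median_offset, gps_z)
print(median_offset, mad, p_value)
```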

Significant differences were found when comparing MAD and median offset values between the different processing scenarios (Table 1). Resolution scenarios were found to differ by 0.02 m, with the 2.00 m data being significantly less accurate than all but the 1.00 m scenario. The 0.05 and 0.10 m resolution data were the most accurate, producing MAD values of 0.09 m. Similarly, the 2.00 m scenario was most different from the finer-resolution data and supported the largest offset of 0.41 m. The smallest offset was associated with the 0.10 m data at 0.34 m. The most accurate elevation scenarios were those that used mean values, while the least accurate used maximum values. There was an increasing median offset from the minimum to maximum elevation scenarios, where the minimum scenario required the smallest offset.

| Scenario   | Subscenario | N  | Median Absolute Deviation (m) | Median Offset (m)   |
|------------|-------------|----|-------------------------------|---------------------|
| Resolution | a. 0.05 m   | 22 | 0.091*** (e,f)                | 0.342* (e), *** (f) |
|            | b. 0.10 m   | 22 | 0.091*** (e,f)                | 0.341* (e), ** (f)  |
|            | c. 0.25 m   | 20 | 0.094*** (f)                  | 0.342*** (f)        |
|            | d. 0.50 m   | 18 | 0.097*** (f)                  | 0.346               |
|            | e. 1.00 m   | 16 | 0.100                         | 0.402               |
|            | f. 2.00 m   | 16 | 0.111                         | 0.436               |
| Elevation  | g. Minimum  | 18 | 0.096                         | 0.314*** (h,i)      |
|            | h. Mean     | 48 | 0.091*** (i)                  | 0.342*** (i)        |
|            | i. Maximum  | 18 | 0.097                         | 0.408               |

  • a Asterisks/letters are used to illustrate which subscenarios are different. Only the first pair that is different is indicated (i.e., the 5 cm resolution subscenario is statistically different from the 1 and 2 m subscenarios, but only the 5 cm subscenario is marked). The median MAD and median offset for all scenarios were 0.093 and 0.347 m, respectively.
  • * p < 0.05
  • ** p < 0.01
  • *** p ≪ 0.01

Since there were a number of SfM-MVS scenarios that produced elevation data similar to that of the GPS survey (Figure 4), the utility of this method for quantifying microform elevation is illustrated through a comparison of histograms between the GPS data and one of the SfM-MVS scenarios, at multiple resolutions (Figure 5). The highlighted scenario utilized both GCP and camera positions (i.e., camera scenario 2), a mild prefilter, our postfilter, and the mean point cloud elevation, hereafter referred to as the “vegetation scenario.” Data from the different resolutions were colocated with the GPS data to ensure comparable spatial scales (i.e., the SfM-MVS data were spatially joined with the GPS data according to their nearest neighbors). The SfM-MVS data were similar to the GPS data across all resolutions.

3.4 Spatial Error Analysis

To understand spatial error, SfM data from the vegetation scenario were compared to the GPS survey, but with limited processing steps (i.e., no median shift and no postfiltering) to better convey the kinds of error one might expect from point clouds (Figure 6) used directly from SfM-MVS software (Figure 7). The results show clear patterns of spatial variation in the errors, suggesting the occurrence of systematic error (Figure 7). Errors were higher in the northeastern portion of the alpine peatland and lower (and often negative) in the southwest. The distinct areas of high and low error were often, but not always, associated with vegetation types (see Figure 1c). For example, two noisy “strips” of high error bisected the peatland vertically (center to left) and diagonally (lower center to upper right). Errors were also high in places where abrupt changes in slope occur, such as at the margin of the peatland.

3.5 Vegetation Influences on Error

To explore the mixed effect of vegetation and resolution on error, the vegetation scenario was again used, but with postprocessing. Both vegetation and resolution were found to significantly impact error of the SfM-MVS data, as demonstrated by distinct clustering of errors (Figure 8). Generally, the short vegetation classes were associated with the lowest MAD values. This is best represented by the short herbaceous vegetation class, which supported the lowest median MAD (0.061 m) across resolutions. However, the lowest MAD across all vegetation and resolutions was 0.058 m, which was associated with the closed tall woody vegetation class at a resolution of 0.10 m. The maximum MAD (0.143 m) was associated with the closed short woody vegetation class at a resolution of 2.00 m. The 2.00 m resolution data also produced the largest median error in almost all cases and tended to be statistically different from finer resolution data. Other resolutions tended to not be significantly different from one another.

Some clustering was observed in the median offset. Most noticeable is the division of offsets above and below 0.35 m. These offsets can be roughly organized between those vegetation classes that are closed or short versus those that are tall or open. This height and openness division is reflected in the maximum (0.469 m) and minimum (0.277 m) median offsets observed. The maximum value was associated with the open herbaceous depression vegetation class and a resolution of 2.00 m. The minimum median offset, on the other hand, was observed in the open short herbaceous vegetation class at both the 0.05 and 0.10 m resolutions. The open short herbaceous vegetation class also generated an average median offset of 0.280 m across resolutions, which was the lowest of all vegetation classes.


PHYSICAL PROCESS MODELS

The physical system serves as the environment in which the social system evolves. Physical process models are executable descriptions of our understanding of atmospheric, oceanic, hydrologic, geologic, and other physical systems. Many of these processes lend themselves to geospatial analysis. Physical process models developed or used by NGA include those used to generate high-resolution representations of Earth's magnetic and gravitational potential (Pavlis et al., 2012; see Figure 1.2). While grounded in the simulation of specific natural phenomena, physical process models often also supply information on the impact of environmental dynamics on human infrastructure, activities, and demographics. For example, the megacities intelligence question (see Box 1.2) would

1 See NGA Academic Research Program Symposium programs for 2005–2015.

BOX 4.1 NGA Partnerships with Universities

NGA has established relationships with dozens of colleges and universities, including historically black colleges and universities, for recruiting and continuing education purposes (NRC, 2013). In addition, NGA has selected a large number of universities as Centers of Academic Excellence in Geospatial Science as a means of cultivating relationships and partnerships. a These include the following:

Alabama A&M University
Arizona State University
Delta State University
Fayetteville State University
George Mason University
Mississippi State University
Northeastern University
Ohio State University
Pennsylvania State University
Roane State Community College
University of Alabama
University of Maine
University of South Florida
University of Texas, Dallas
University of Utah
U.S. Air Force Academy
U.S. Military Academy

a See NGA Academic Research Program Symposium programs for 2005–2015.

likely require models of environmental changes that could stress urban populations, such as sea-level rise and increases in summer temperatures. The Chinese water transfer questions would likely require a large-scale model of the hydrologic system in China to predict surface flow, subsurface flow, and abundance of water under different water diversion scenarios.

Many physical process models involve simulating fluids governed by the Navier-Stokes and continuity equations, which represent conservation of momentum and mass and are solved numerically through finite or spectral discretization approaches. Accurate and representative observations of the natural system are critical both for creating the physical process models themselves and for setting the correct initial and boundary conditions that constrain the physical processes in a model-based investigation.
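For reference, in their standard incompressible form these equations are (many geophysical models solve compressible or hydrostatic variants):

```latex
% Standard incompressible forms of the governing equations referenced above.
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f}
  \quad \text{(conservation of momentum)}
\qquad
\nabla \cdot \mathbf{u} = 0
  \quad \text{(conservation of mass for constant density)}
```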

Physical process models can be large and highly nonlinear and may couple together multiple processes over a wide range of space and time scales (e.g., Figure 4.1). Large, complex physical process models are often expensive to develop and run. In some cases, it may be sufficient to run reduced-order models, which use theoretical approaches to develop a simplified version of the full process model (Berkooz et al., 1993; Mignolet and Soize, 2008; Moore, 1981). Reduced-order models are intended to provide adequate approximations to high-fidelity models at significantly lower cost and time-to-solution.

A reduced-order model need not be faithful to the full spatiotemporal dynamics of the high-fidelity model; it need only capture the essential structure of the mapping from the input parameters to the outputs of interest. The most popular approach to model reduction is to reduce the state dimension and the state equations using projection-based methods (Benner et al., 2015; Chinesta et al., 2016). Such methods are most successful for linear or weakly nonlinear models in low parameter dimensions. However, constructing efficient and capable reduced-order models that can handle complex nonlinear dynamical models and that are faithful over high-dimensional parameter spaces remains challenging.
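A minimal, generic sketch of the projection idea, here via proper orthogonal decomposition of a synthetic snapshot matrix (not tied to any particular model discussed in this report):

```python
import numpy as np

# Minimal sketch of projection-based model reduction via proper orthogonal
# decomposition (POD). X holds snapshots of a full model state (one column per
# run/time step); the data here are synthetic stand-ins.
rng = np.random.default_rng(1)
n_state, n_snapshots, r = 500, 40, 5
X = rng.standard_normal((n_state, 3)) @ rng.standard_normal((3, n_snapshots))
X += 0.01 * rng.standard_normal(X.shape)          # small noise on a low-rank field

# Truncated SVD of the snapshot matrix gives the reduced basis.
U, s, _ = np.linalg.svd(X, full_matrices=False)
basis = U[:, :r]                                  # n_state x r projection basis

# Project a full state onto r coordinates and lift it back.
x_full = X[:, 0]
x_reduced = basis.T @ x_full                      # r-dimensional representation
x_approx = basis @ x_reduced                      # reconstruction in full space
print(np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full))
```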

Reduced-order models or emulators, which replace the computational process model with a response surface model, are also used to speed inverse or sensitivity analyses that require many model runs. The emulator is trained on an ensemble of physical process model runs and creates a response surface mapping model inputs to model outputs (Marzouk and Najm, 2009; O'Hagan, 2006). Emulators can be used to predict model outcomes at untried input settings, allowing more thorough inverse or sensitivity analyses to be carried out.
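A toy sketch of the emulator idea, using a Gaussian process response surface trained on a stand-in for an expensive simulation (the toy function and all names are illustrative only):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Sketch of an emulator: fit a response surface to an ensemble of (input, output)
# pairs from a process model, then predict at untried inputs. The "model" here is
# a toy function standing in for an expensive simulation.
def expensive_model(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X_train = rng.uniform(-1, 1, size=(30, 2))        # ensemble of input settings
y_train = expensive_model(X_train)                # corresponding model outputs

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

X_new = rng.uniform(-1, 1, size=(5, 2))           # untried input settings
y_pred, y_std = gp.predict(X_new, return_std=True)
print(y_pred, y_std)                              # predictions with uncertainty
```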

FIGURE 4.1 Characteristic spatial and temporal scales of Earth system processes. SOURCE: NAC (1986).

State of the Art

Some physical processes, such as turbulence and heat and fluid flow, are relatively well understood and modeled due to a wealth of observation and theory. Other physical processes, such as subsurface dynamics, clouds, and ocean biogeochemistry, remain a challenge to model because of a lack of observations. This is a particular problem for highly detailed simulations of flows or other processes that vary at fine spatial or temporal scales (e.g., groundwater flows and precipitation). In addition, there are uncertainties regarding how any natural system responds to forcings and feedbacks.

Climate models are mature for the questions and scales they were designed to address and are a good example of the state of the art in physical process models. The models are complex (involving multiple processes, see Figure 3.4, and multiple types and scales of observations), are computationally demanding, and typically require a dedicated research center or a large community of researchers to develop, validate, and run. A global research community has emerged to simulate consistent past, present, and future climates based on scenarios of climate drivers, such as greenhouse gas concentrations, volcanic and manmade aerosols, and solar strength. Although the response of individual models to climate forcings varies, uncertainty is typically minimized by using multimodel ensembles that have run the same scenario (e.g., Figure 3.5). One of the greatest uncertainties in climate models concerns which emissions scenario will match future societal choices about energy, transportation, agriculture, and other factors (IPCC, 2013).

Substantial efforts have been made to downscale large-scale climate simulation results to the regional or local scales that are more relevant to decision making (Kotamarthi et al., 2016). The goal of downscaling is to achieve the realistic high-frequency spatial and temporal variance of the real world that the coarser information lacks. There are three primary downscaling approaches:

  1. Simple, which adds trends in the coarse-scale data to existing higher-resolution observations (Giorgi and Mearns, 1991);
  2. Statistical, which relates large-scale features of the coarse data to local phenomena using regression methods, typology classification schemes, or variance generators that add realistic high frequencies to the coarse data (Wilby et al., 2004); and
  3. Dynamical, which uses the coarse information as input to high-resolution computer models to dynamically simulate phenomena at much finer temporal and spatial scales.

Each downscaling approach has its own advantages and disadvantages. The less complex methods are simple, fast, and inexpensive to calculate, but they may produce inaccurate results, particularly for future scenarios that may differ from historic observations (Gutmann et al., 2014). Dynamical downscaling is more complex and expensive, but it has the potential to generate high-resolution information over a wider range of extrapolative scenarios. An example of a downscaling application is illustrated in Figure 4.2.
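A toy sketch of the first (delta) approach, with synthetic arrays standing in for coarse model output and high-resolution observations:

```python
import numpy as np

# Sketch of the "simple" (delta) downscaling approach described above: add the
# change signal from coarse model output to existing high-resolution observations.
# All arrays here are synthetic placeholders.
rng = np.random.default_rng(3)

coarse_hist = 15.0 + rng.normal(0, 0.5, size=(10, 10))    # coarse historical mean field
coarse_future = coarse_hist + 2.0                         # coarse future projection
delta_coarse = coarse_future - coarse_hist                # coarse-scale change signal

obs_highres = 14.0 + rng.normal(0, 2.0, size=(100, 100))  # high-resolution observations

# Resample the coarse delta onto the fine grid (nearest-neighbour for brevity)
delta_fine = np.kron(delta_coarse, np.ones((10, 10)))

downscaled_future = obs_highres + delta_fine              # fine-scale future estimate
print(downscaled_future.shape, downscaled_future.mean())
```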

FIGURE 4.2 Mean annual precipitation downscaled (upper) from NCEP-NCAR Reanalysis (lower) helps capture the regional influences of high-variability terrain, such as the Rocky Mountains. NOTE: BCSDm = bias-corrected spatial disaggregation applied at a monthly time step, the statistical method used for downscaling. NCAR = National Center for Atmospheric Research; NCEP = National Centers for Environmental Prediction. SOURCE: Gutmann et al. (2014).

The state of the art in physical process models, particularly weather and climate models, is summarized in Box 4.2.

BOX 4.2 State of the Art in Physical Process Models
  • Space and time scales: The scales simulated by the family of physical process models cover subatomic to galactic spatial scales and near-instantaneous to geologic time scales. Specific physical process models and methods are generally designed to simulate specific processes and features within a limited space and time scale (see Figure 4.1) and are rarely valid outside of those scales.
  • Fidelity: Physical process models are designed to balance the degree of fidelity to the amount of accessible computing power, the time available to carry out the simulation, and the complexity of the system being modeled. Models range from simple, low-fidelity models that can be run in seconds on a low-end tablet computer to very complex high-fidelity simulations that take months to run on the fastest supercomputers.
  • Accuracy and precision: In physical process models, accuracy refers to how closely the simulation matches the average behavior of the observed system, whereas precision is a measure of the variance. A prediction that tomorrow's temperature will be less than 200ºF is surely accurate but imprecise, whereas a prediction that it will be 23.456789ºF is precise but likely to be inaccurate. The accuracy of many types of physical process models, such as numerical weather models, has been improving over time because of constant advancements in scientific knowledge and technology (Bauer et al., 2015). Both accuracy and precision are taken into account in measures of model prediction skill.
  • Predictions and scenarios: The predictability limit for an individual forecast of the highly nonlinear weather system is about 2 weeks. The predictability goal for climate models is currently a season to a decade (a) through better knowledge and observations of the ocean thermodynamics, which is the principal driver of the system at those scales, and (b) by forecasting outlooks of lower-precision features, such as wet or dry trends over broad areas, rather than precise temperatures or rainfall amounts at specific locations (Slingo and Palmer, 2011).
  • Uncertainty analysis support: In physical process models, uncertainty characterization is a method for conveying the uncertainties that are inherent when simulating continuous environments with discrete grid points and time steps, using imperfect observations and models. Because physical process models are increasingly being used in decision-making contexts, more emphasis is being placed on quantifying model uncertainty in a manner that makes the data more usable and actionable.
  • Validation and assessment support: Process models are validated though detailed comparisons of the accuracy and precision of the simulations relative to the observations of the physical system being simulated.
  • Computational requirements: State-of-the-art physical process models have matched their computational requirements to the rapid increase in computational capability over the past two decades. Petascale (10^15 floating-point operations per second [FLOPS]) computers became firmly established in 2014, and exascale (10^18 FLOPS) architectures are currently being designed.
  • Data requirements: Physical model data output ranges from insignificant to overwhelmingly large, even in installations with dedicated automated high-performance mass storage systems. Substantial model output is publicly available, and much of it uses standardized data and metadata formats to improve interoperability.
  • Difficulty to develop: Low-fidelity models of simple physical systems can be trivial to develop, whereas high-fidelity models of complex systems require years of effort by large teams of researchers.
  • Reuse: Physical process models are usually designed to be reused extensively.
  • Software/code availability: Although software and codes for physical process models developed in academia are often open source, most physical process models developed for commercial, classified, or emerging research applications require establishing contractual relationships to acquire or use.
  • Training support: Highly variable. While many physical process models include sufficient documentation on their use and application, others do not.

How to Make Useful for NGA

Rather than attempting to build in-house expertise in all relevant physical processes, NGA could leverage existing expertise in other organizations, either by becoming a user of model results or by becoming a partner in teams experienced in designing, carrying out, and analyzing physical process simulations. Vast amounts of process model output data are readily available, although much of it would require additional context to be useful for NGA applications, and much would have to be downscaled to the regional and local scales most relevant to NGA questions. In addition, some features of process model simulations may be reliable and useful in new applications. For example, framing the analysis in terms of risk (a function of vulnerability, exposure, and hazard) can be useful for examining the impact of physical processes on human systems. Finally, some process modeling teams develop benchmark scenarios (e.g., future emissions trajectories for climate models), and they may be willing to work with NGA to develop, run, and interpret scenarios tailored to NGA's specific interests. In all of these situations, NGA will need to invest time identifying domain experts and collaborating with them to design new simulations or scenarios, to select existing model output appropriate to the NGA question under consideration, to minimize uncertainties associated with downscaling, or to understand the strengths and weaknesses of the models in NGA scenarios.

The results of physical process models are increasingly being adapted for use in geospatial tools such as geographic information systems (GISs), which could facilitate their use in geospatial intelligence. The capabilities and sophistication of geospatial technologies as well as the large size of the GIS community have prompted many physical process modeling groups to ensure that their model results can be integrated into the rapidly proliferating suite of open and commercial GIS tools. Physical process model data can be made GIS compatible by using controlled vocabularies, standardized conventions for time and geolocation, and metadata. Once the georeferenced physical process model data are in GIS-ready formats, they can easily be mapped into human systems such as populations, cities, infrastructure, land forms, or social entities. This capability enables interactive data exploration, analysis, visualization, and distribution, all of which would improve delivery of usable model information to a broad range of users and uses (Wilhelmi et al., 2016). An example of a GIS analysis of climate model results is shown in Figure 4.3.
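As a small illustration of the kind of preparation this involves, the sketch below writes a synthetic model field to netCDF with georeferenced coordinates and CF-style metadata; it assumes xarray with a netCDF backend installed, and all names and values are placeholders.

```python
import numpy as np
import xarray as xr

# Sketch of making model output GIS-friendly: georeferenced coordinates plus
# CF-style metadata, written to netCDF. Values and names are placeholders.
lat = np.linspace(29.5, 30.2, 8)
lon = np.linspace(-95.8, -95.0, 10)
temperature = 30 + 5 * np.random.rand(lat.size, lon.size)

da = xr.DataArray(
    temperature,
    coords={"lat": lat, "lon": lon},
    dims=("lat", "lon"),
    name="air_temperature",
    attrs={"units": "degC", "long_name": "2 m air temperature"},
)
da["lat"].attrs.update(units="degrees_north", standard_name="latitude")
da["lon"].attrs.update(units="degrees_east", standard_name="longitude")

da.to_dataset().to_netcdf("model_output_gis_ready.nc")   # readable by most GIS tools
```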

FIGURE 4.3 "Beat the Heat in Houston," a Web-based tool that integrates temperature data with information about cooling centers and Centers for Disease Control and Prevention–based recommendations for at-risk populations. SOURCE: Courtesy of Jennifer Boehnert, National Center for Atmospheric Research.

NGA-Funded Research and Development Areas

Geospatial intelligence is based on analysis of the environmental context, relevant natural and human factors, and potential threats and hazards. 2 Because so many physical processes are relevant, and because the linkages between physical process models and downstream impacts are often highly nonlinear, NGA will be challenged to determine which physical process models are sufficiently important to develop and maintain for geospatial intelligence applications. Moreover, the important physical process models will depend on the intelligence application, and their importance can only be unambiguously assessed after a complete end-to-end system has been constructed. That said, some areas do lend themselves to NGA research and development, such as the following:

  • Precision real-time weather predictions to support applications such as (a) ground and aerial insertion, mission supply, and disaster relief missions in complex natural environments, or (b) source location, dispersion rate, and damage estimations from the release of hazardous materials in densely populated urban environments
  • Improved simulations to anticipate regional extreme events and environmental threats—such as droughts, relentless heat (sustained high heat and humidity) events, disease vector precursors, or rapid sea-level rise—that could lead to large-scale social unrest, disruption, or migration
  • System emulators and reduced-order models of large complex physical systems—such as climate, water, or chemical processes—to allow rapid exploration of scenarios of interest to NGA
  • Methods to rapidly and inexpensively predict the importance of physical processes on intelligence applications
  • Design of robust frameworks, couplers, and application program interfaces to include physical process models in intelligence applications and
  • Refinement of physical models to facilitate their combination with social system models to gain additional understanding and potentially predictive capability of relevant coupled physical–social system stresses and behaviors.

To help determine what physical process models are needed, NGA could survey its analysts and customers on gaps in data, information, knowledge, and capability. NGA could then build the in-house disciplinary knowledge and expertise to develop models or use available physical process data needed to fill the gaps. Partnering with other agencies that have relevant experience or mission responsibilities would likely speed the development of NGA's in-house capabilities.


Automatic 3D City Modeling Using a Digital Map and Panoramic Images from a Mobile Mapping System

Three-dimensional city models are becoming a valuable resource because of their close geospatial, geometrical, and visual relationship with the physical world. However, ground-oriented applications in virtual reality, 3D navigation, and civil engineering require a novel modeling approach, because the existing large-scale 3D city modeling methods do not provide rich visual information at ground level. This paper proposes a new framework for generating 3D city models that satisfy both the visual and the physical requirements for ground-oriented virtual reality applications. To ensure its usability, the framework must be cost-effective and allow for automated creation. To achieve these goals, we leverage a mobile mapping system that automatically gathers high-resolution images together with supplementary sensor information such as the position and direction of the captured images. To resolve problems stemming from sensor noise and occlusions, we develop a fusion technique to incorporate digital map data. This paper describes the major processes of the overall framework and the proposed techniques for each step and presents experimental results from a comparison with an existing 3D city model.

1. Introduction

Three-dimensional city models are widely used in applications in various fields. Such models represent either real or virtual cities. Virtual 3D city models are frequently used in movies or video games, where a geospatial context is not necessary. Real 3D city models can be used in virtual reality, navigation systems, or civil engineering, as they are closely related to our physical world. Google Earth [1] is a well-known 3D representation of real cities. It illustrates the entire earth using satellite/aerial images and maps superimposed on an ellipsoid, providing high-resolution 3D city models.

In the process of 3D city modeling, both the cost and the quality requirements must be considered. The cost can be estimated as the time and resource consumption of modeling the target area. The quality factor considers both visual quality and physical reliability. The visual quality is proportional to the degree of visual satisfaction, which affects the level of presence and reality. The physical reliability is the geospatial and geometrical similarity between the objects—in our case, mainly buildings—in the modeled and physical worlds. Generally, accomplishing a satisfactory level for both requirements is difficult.

Numerous techniques can be used for 3D city modeling. For instance, color and geometry data from LiDAR are mainly used if the application requires detailed building models for a small area. If the city model covers a large area and does not need detailed features, reconstruction from satellite/aerial images is more efficient [2]. This means that the effective approach can differ according to the target application using the 3D city model. The goal of our research is to propose a 3D city modeling method that can be applied in ground-oriented and interactive virtual reality applications, including driving simulators and 3D navigation systems, which require effective 3D city modeling methods for diverse areas.

2. Related Work

Existing 3D city modeling methods can be divided into manual modeling methods, BIM (Building Information Model) data-based methods, LiDAR data-based methods, and image-based methods.

The manual modeling method is highly dependent on modeling experts. Older versions of Google Earth and Terra Vista employed this method in their modeling systems. Although current applications still employ manual modeling because of its high quality, the method is a high-cost, labor-intensive process. Hence, it is not efficient for urban environments that include numerous buildings. The BIM data-based method facilitates the use of building design data from the construction stage and is applied in city planning [3] and fire and rescue scenarios [4]. However, this method is only efficient for applications in which the activity areas are strictly constrained, as gathering BIM data is problematic or even impossible given the large size of urban environments. Moreover, BIM data must be postprocessed for use in virtual reality applications, because BIM does not contain an as-built 3D model and the properties for the visual variables must be mapped separately.

To address these problems, remote sensing techniques are being aggressively adopted and studies of LiDAR data-based methods and image-based methods are increasingly common [5]. LiDAR is a device that samples precise data from the surrounding environment using laser scanning technology. In several studies (e.g., [6, 7]), high-quality 3D city models have been reconstructed using LiDAR data. However, as noted in other work [6], ground-level scanning has a limited data gathering range, meaning that redundant data collection is unavoidable in the modeling of diverse areas, whereas airborne LiDAR [8] is limited in terms of its cost and color data collection methods.

Image-based methods include those based on stereo matching and inverse procedural modeling approaches. In earlier research [7, 9], a method based on stereo matching was used to recover 3D data from feature point matching between a series of images. This approach usually requires numerous images to satisfy the accuracy and robustness requirements of feature point matching. Several recent inverse procedural modeling approaches [10–12] have modeled buildings using relatively few (mainly one) images, which can overcome the data-collection difficulties of stereo matching. This approach employs a plausible assumption, namely that the shape of a building consists of a set of planes in three dimensions, to reconstruct individual 3D buildings without pixel-wise 3D information. However, because image-based methods are not robust against instances of occlusion, user input or strong constraints are frequently necessary. This reduces their cost effectiveness and/or physical reliability.

In our research, we propose an approach that preserves cost efficiency by using an existing image database while increasing physical reliability. An image database is easier to access than a LiDAR database, so data-collection costs can be reduced. On the other hand, stereo matching-based methods require numerous images for large-scale modeling, which limits their general applicability. We therefore prefer an inverse procedural modeling approach for our objective, and we increase physical reliability by combining it with accurate reference data [13].

3. Proposed Method

3.1. Mobile Mapping System and Digital Map

In this study, we propose a framework that uses a massive number of images gathered from a mobile mapping system (MMS). This addresses many problems in existing methods, which cannot simultaneously provide feasible levels of cost effectiveness, visual quality, or physical reliability. An MMS collects and processes data from sensor devices mounted on a vehicle. Services such as Google Street View [14], Naver Maps [15], and Baidu Maps [16] present information in the form of high-resolution panoramic images that include the geospatial position and direction of each image taken. The main focus of these services is to offer visual information about the surrounding environment at a given location. The advantages of data collected from MMS are as follows. (1) Nationwide or even worldwide coverage following the development of remote sensing technologies and map services. (2) Rich, visual, and omnidirectional information. (3) Sensor information that allows geospatial coordination with the physical world.

Using these advantages, we can model a diverse city area for ground-oriented interactive systems in a cost-effective way with the existing image database. Moreover, high visual quality at ground level can be provided by high-resolution panoramic images [17]. However, there are currently several disadvantages in the data collected from MMS. (1) Sensor data includes noise, which lowers its physical reliability. (2) The number of images in a given area is limited and is insufficient for stereo matching-based reconstruction. (3) Inclusion of an enormous amount of unnecessary visual information, including occlusions, cars, and pedestrians.

Noise is unavoidable in the sensing process. The amount of error this introduces differs according to the surrounding environment; a ±5 m positional error and ±6° directional error have been reported in Google Street View data [18]. Such error levels can be problematic in the analysis required for 3D modeling. Moreover, the current service has an interval of 10 m between images, which lowers the possibility of successful reconstruction using stereo matching. Additionally, the uncontrolled collection environment results in a severe disadvantage for inverse procedural modeling. MMS data also requires an additional process to classify individual buildings, unlike the inverse procedural modeling approaches.

To address these problems with MMS data, we propose a method that incorporates 2D digital map data. Digital maps have accurate geospatial information about various features in the physical world. For instance, the 1:5000 digital maps applied in our framework have a horizontal accuracy of 1 m, which is five times better than that of the MMS position data. Therefore, by combining the two data sources, the problems of sensor error can be overcome and the selective use of visual information becomes possible. On the other hand, the geometrical characteristics of the buildings are restricted to a quasi-Manhattan world model. The quasi-Manhattan world model assumes that structures consist of vertical and horizontal planes; it extends the Manhattan world model, which assumes that structures consist of vertical and horizontal planes that are orthogonal to each other.

3.2. Process Overview

The proposed framework is illustrated in Figure 1. The input data are the aforementioned digital maps, which contain building footprint information, and panoramic images from the MMS with sensor data. The base 3D model is generated from the footprint information of the buildings; the individual building regions are segmented and reprojected according to the combined GPS/INS (Inertial Navigation System) information. The reprojected region is further segmented and rectified to produce the texture image. Height estimation is possible by combining the building contour information from the texture image and the reprojected image. We can then obtain the textured 3D model by applying the height information to modify the height of the base 3D models.
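As a simple geometric illustration of the height-estimation step (not the paper's exact algorithm): once the camera position is known from GPS/INS, the horizontal distance to the footprint is known from the digital map, and the roofline has been located in an equirectangular panorama, the building height follows from the elevation angle of the roofline. All values below, including the assumed camera height, are hypothetical.

```python
import math

# Simple geometric illustration (not the paper's exact algorithm): estimate a
# building's height from (a) the horizontal distance between the camera and the
# building footprint, known from the digital map and GPS/INS, and (b) the
# elevation angle of the roofline located in an equirectangular panorama.
def roofline_elevation_angle(row: int, image_height: int) -> float:
    """Elevation angle (radians) of a pixel row in an equirectangular panorama,
    where row 0 is the top of the image (+90 deg) and the bottom row is -90 deg."""
    return math.pi * (0.5 - row / image_height)

def building_height(distance_m: float, roof_row: int,
                    image_height: int, camera_height_m: float = 2.5) -> float:
    angle = roofline_elevation_angle(roof_row, image_height)
    return camera_height_m + distance_m * math.tan(angle)

# e.g. roofline detected 1/4 of the way down a 4000 px tall panorama,
# footprint 15 m from the camera -> roughly 17.5 m tall
print(building_height(distance_m=15.0, roof_row=1000, image_height=4000))
```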


Effortless data presentation

Navigate to any point along a transportation network and view your pipeline superimposed over terrain images sourced from satellite imagery, aerial photography or maps. Objects such as defects, casings and valves, and the results of PiMSlider analyses can also be displayed on the map.

PiMSGlobe's fast and accurate 3-dimensional reconstruction of pipeline and ground surface helps you to visualise the topography of an area and identify terrain features to help locate pipeline segments in the field.

PiMSGlobe 2-Dimensional Map with Pipeline

PiMSGlobe 3-Dimensional Map with Pipeline

PiMSGlobe Offshore Seabed Display

Further Information

Based on standards available from dynamic online maps and rich geospatial technology, PiMSGlobe connects to almost any source of mapping data and, combined with the details of the pipeline, presents an interactive experience to help you visualise and manage your transportation network.

PiMSGlobe is tightly integrated with all of PiMSlider's core and Expert modules: any object created in PiMSlider with known linear reference information, spatial position or just GIS shape data can be shown on the map. The map layers can be stored locally within PiMSlider data structures or can be connected from different GIS systems and MapServers – the major map-sharing interfaces are built in.

Pipeline and centreline data can be read from PODS/APDM/ESRI GIS systems with seamless integration to PiMSlider.

Key Facts

  • Interactive 3-dimensional map with rich navigation controls
  • Intuitive user interface
  • Deep integration with PiMSlider’s core and Expert modules
  • Supplied with data management and interface connection tools
  • Supports Customer’s and public map sources
  • Offshore data compatible with DEM/DTM rendering
  • Precise visualisation of pipelines and the results of PiMSlider calculations
  • Supports linear referenced, spatial and shape objects
  • Generate high resolution reports including analysis results.



Results

The emission spectrum of the ONH is characterized by several distinct emission peaks (Figure 2). The strongest peak is at λSHG = λe/2, where λe is the excitation wavelength. This first peak (arrow) marks the emission of second harmonic generated light. SHG emission from biological tissues is indicative of the presence of collagen, because its fibrillar structure satisfies the phase-matching condition required for the emission of second harmonic generated signals. Secondary peaks with λ < λe indicate the presence of two photon excited fluorescence of various components of the ONH. In the sclera, the relative intensities of the TPEF peaks are markedly lower than those of the SHG peaks. This is likely due to a comparatively higher collagen density and/or a reduction in the fluorescent components in the sclera compared to the ONH.

Emission spectra of the sclera and a plastic section of human ONH tissue. The solid line shows the emission generated from the ONH using an excitation wavelength of 800 nm. The arrow marks the second harmonic peak at 400 nm. Secondary peaks at longer wavelengths indicate the presence of two photon excited fluorescence from other ONH components.

Two-photon emissions from the sclera are indicated by the dashed line. Note the higher intensity of the SHG peak relative to the TPEF peaks. The secondary peaks match those of the ONH spectrum, suggesting a similar composition.

As shown in Figure 3, immunostaining of the ONH tissue with a monoclonal antibody to type I collagen closely matched the results obtained from imaging the SHG signal. The same tissue features are visible in both the SHG channel (Figure 3A) and the immunostained channel (Figure 3B). The overlay (Figure 3C) shows a high degree of colocalization between the two signals, confirming the association between collagen and the emission of second harmonic generated signals.

Correlation of SHG and collagen immunostaining. Left panel: SHG image of human ONH tissue (A). The same area stained with an anti-collagen IgG antibody followed by secondary antibody (B). The overlay (C) confirms the colocalization. Right panel: SHG image of human ONH tissue (D), the same area stained with secondary antibody alone (E) and the resulting overlay (F). Bars: 100 μm.

To fully reconstruct the ONH in three dimensions, a block of tissue was embedded in plastic resin and serially sectioned. Each microtome section was scanned individually using a grid of 7 × 7 individual images. A correlation algorithm that is part of the Zeiss LSM software automatically corrected minor misalignments during the automated concatenation process. The resulting image (Figure 4A) retains the 0.91 micron/pixel resolution while providing a field of view large enough to encompass the entire width of the ONH. The image shows a single 2 micron section from the laminar area. The scleral canal wall, with its high collagen content, is visible as a ring-like structure surrounding the laminar meshwork. Owing to the high resolution of these images, individual collagen beams are clearly distinguishable, as seen in Figure 4B, which shows a magnified view of the area around the central artery (marked by an arrow in Figure 4A). These beams insert peripherally into the scleral canal wall and centrally into the central artery. The inferior temporal region appeared to be deficient in collagen beams (asterisk) in most slides, suggesting a low collagen density in this region.

Orthogonal cross-section of the lamina cribrosa obtained using SHG imaging. (A) A single-plane image mosaic consisting of 49 individual image tiles scanned in sequence at an excitation wavelength of 800 nm; the mosaic represents a physical section with a thickness of 2 μm. The native resolution of this image is 3584 by 3584 pixels. The arrow marks the location of the central artery. The laminar meshwork is clearly visible. (B) Magnified view of the area around the central artery. Owing to the high resolution, details such as the insertion of individual collagen beams into the central artery are clearly visible.
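For illustration only, tile-to-tile misalignment of this kind can be estimated with a generic phase-correlation approach, as sketched below; this is not the Zeiss LSM correlation algorithm, and the tile sizes and offsets in the example are arbitrary.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

# Generic sketch of correlation-based alignment of neighbouring mosaic tiles
# (not the Zeiss LSM algorithm): estimate the residual translation between
# the overlapping regions of two tiles and shift one of them to match.
def align_overlap(overlap_ref, overlap_moving):
    offset, error, _ = phase_cross_correlation(overlap_ref, overlap_moving)
    return nd_shift(overlap_moving, shift=offset), offset

# Synthetic example: the "moving" overlap is the reference shifted by (2, -1) px.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
moving = np.roll(ref, shift=(2, -1), axis=(0, 1))
aligned, offset = align_overlap(ref, moving)   # offset is approximately (-2, 1)
```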

The slide shown in Figure 4 demonstrates a variation in local collagen content but represents only a single 2 micron plane of the full dataset. To visualize the overall collagen distribution throughout the whole scanned volume, the image stack was thresholded and collapsed onto a single plane by performing an “average z” projection. The resulting average gray values were mapped to a full-color lookup table to better visualize the variability in density within the dataset. Figure 5 shows the resulting image, which represents a full-resolution density map of the collagen distribution throughout the ONH. Black pixels indicate the absence of collagen throughout the entire stack at that particular location; cooler colors represent a low average collagen density, whereas hot colors indicate a high collagen content throughout the stack, as indicated by the scaling bar. The scleral ring and the central artery show the highest collagen content, as they are present throughout the entire stack. The laminar meshwork, which is barely visible in the prelaminar region, results in a markedly lower average collagen content. Within the lamina, higher collagen densities are found in the nasal and superior regions, whereas the inferior-temporal region is marked by considerably lower average collagen content.

Collagen density map of the ONH. To generate this image, an average z projection was performed on the image stack which was thresholded for collagen. A density of 100% indicates the presence of collagen in every plane of the stack, whereas a density of 0% corresponds to a complete lack of collagen in that particular location.
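A minimal sketch of this density-map computation, assuming the SHG stack is available as a NumPy array; the threshold value and array dimensions below are placeholders, not the values used in the study.

```python
import numpy as np

# Sketch of the density-map step described above: threshold each section for
# collagen, average along z, and express the result as a percentage.
def collagen_density_map(stack, threshold):
    """stack: (n_planes, H, W) array of SHG intensities."""
    binary = stack > threshold            # 1 where collagen is detected, 0 elsewhere
    density = binary.mean(axis=0) * 100   # "average z" projection, in percent
    return density                        # 100%: collagen in every plane; 0%: none

# Example with random data standing in for the 294-plane SHG stack.
rng = np.random.default_rng(1)
density = collagen_density_map(rng.random((294, 512, 512)), threshold=0.8)
```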

This can be better appreciated by studying a three-dimensional rendering of the entire dataset. Using Amira, the whole stack was reconstructed in 3-D as a Voltex object. By using OrthoSlice modules to scroll through the dataset along all three axes, the LC can be viewed from numerous angles. Rotating the reconstructed stack to the appropriate position allows for a longitudinal cross-section view that shows the expansion of the scleral canal and the posterior bowing of the LC (Figure 7D).

ONH area and collagen density analysis. (A) Canal area, collagen area and total pore area throughout the ONH. The amount of available pore space is reduced inside the LC. (B) Collagen content per anatomical region plotted as a function of depth. Each plane was analyzed individually. The density increases significantly inside the LC.

Quantifying the dimensions of the ONH was achieved with MetaMorph’s measuring tools. The diameter of the ellipse-shaped scleral canal opening at the anterior (retinal) side was determined. For this eye, the ellipse measured 1845 μm along the major and 1765 μm along the minor axis. The canal area expanded from 2.41 mm² at the most anterior aspect at Bruch’s membrane to 5.01 mm² at the posterior aspect of the dataset.

To obtain a quantitative analysis of collagen content and distribution throughout the entire ONH, the 294 planes comprising the full dataset were analyzed individually for collagen density and scleral canal area. As the retinal ganglion cell axons pass through the pores formed by the collagen beams, the amount of space available to the axons is directly related to pore size. Figure 7A shows an analysis of the prelaminar and laminar area. Here, the scleral canal widened posteriorly. Further, larger pore sizes dominated in the anterior portion of the lamina, but total pore area was restricted within the LC. Since the laminar meshwork was bowed in a posterior direction, the cross-sections of individual planes on the anterior side of the structure contained collagen only near the scleral canal wall. This coincided with an overall lower collagen content, which increased as one moved down the canal in a posterior direction. The maximum collagen content was found in areas where the entire cross-section of the scleral canal was filled with collagen beams.

To better correlate the collagen distribution with clinical observations of the ONH, the dataset was divided into four anatomical regions as outlined in Figure 1. Collagen density values were obtained by performing a plane-by-plane analysis of the entire LC. Analyzing the collagen content by anatomical region (Figure 7B) reveals a steep increase in total collagen content about 140 μm into the stack, marking the beginning of the lamina. Within the lamina, the collagen content rises in all four sectors. However, the inferior quadrant showed a significantly lower average collagen content of 19.0% ± 2.5% throughout the laminar area compared to the other three quadrants, which had collagen densities of 25.2% ± 2.3%. This observation is consistent with the analyses presented in Figures 4 and 5.

Based on data from Figure 7, a plateau in relative collagen content in the laminar region is present within 60 planes or 120 μm starting approximately 400 μm from the surface of the stack. These planes were collapsed into a single plane by performing an average z projection after thresholding. The resulting image was then analyzed for relative collagen content by performing a radial analysis from the center of the image to the scleral canal wall through 360°. The scan was started at the 12 o’clock (superior) position and moved clockwise in single degree increments. For each line, the average collagen content was calculated. The resulting plots are shown in Figure 8, demonstrating the lack of uniformity in the angular distribution of collagen density in the laminar region.

Morphometric area analysis for collagen using line scans. The scan was performed twice: (A) shows the collagen content of the peripheral LC excluding the central portion containing the artery as shown in Figure 1 . (B) shows the collagen content of the entire LC. A running average over 5 data points was performed to smooth out the graphs while retaining information about relative positions. The central arterial walls are visible as twin local maxima in (B) at 260 and 285 degrees.

This scan was performed twice, with the first scan omitting the central portion of the lamina to exclude the central artery ( Figure 8A ). The second scan includes data from both the central and peripheral LC ( Figure 8B ). The high degree of variability in collagen content seen in this graph suggests a heterogeneous distribution of collagen in the LC. Collagen content varied significantly, even within each anatomical region.
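For clarity, the radial line-scan analysis can be sketched as follows; the centre position, radius and sampling density are illustrative assumptions rather than the values used here.

```python
import numpy as np

# Sketch of the radial line-scan analysis (illustrative only): sample the
# thresholded, z-projected collagen image along one ray per degree from the
# canal centre, average along each ray, then smooth the 360-value profile
# with a 5-point running average as described in the caption of Figure 8.
def radial_collagen_profile(density_map, centre, radius, samples_per_ray=200):
    cy, cx = centre
    profile = np.empty(360)
    for deg in range(360):
        # 0 degrees = 12 o'clock (superior), advancing clockwise.
        theta = np.deg2rad(deg)
        r = np.linspace(0, radius, samples_per_ray)
        ys = np.clip((cy - r * np.cos(theta)).astype(int), 0, density_map.shape[0] - 1)
        xs = np.clip((cx + r * np.sin(theta)).astype(int), 0, density_map.shape[1] - 1)
        profile[deg] = density_map[ys, xs].mean()
    # 5-point running average, wrapping around 360 degrees.
    kernel = np.ones(5) / 5
    return np.convolve(np.r_[profile[-2:], profile, profile[:2]], kernel, mode="valid")

# Example on a synthetic 512 x 512 density map centred at (256, 256).
rng = np.random.default_rng(2)
smoothed = radial_collagen_profile(rng.random((512, 512)) * 100, centre=(256, 256), radius=240)
```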


Real Time Soft Shadow Rendering

Recent advances in GPU hardware have made it practical to add shadows in movies, games and applications.

What is a shadow, and why is it necessary?

A shadow is a dark region that appears where a body comes between a light source and a surface. Shadows bring realism to an image and help the viewer understand object placement in a 3D scene.

Previously, hard shadows were used because of hardware constraints. Hard shadows depend on a single binary notion: whether a point is in shadow or not. They are relatively easy to compute but do not produce convincing results. A further problem is that computing hard shadows assumes a point light source, which does not exist in real life.

Soft shadows were introduced to generate more realistic images. Instead of a point light source, an area light source is used, and object placement in 3D space is conveyed more accurately. The shadow is divided into the umbra (the fully dark region on the surface) and the penumbra (a partially lit region); combining the two produces a soft shadow.

Several techniques have been introduced to render shadows, such as:

  • Raytracing
  • Radiosity
  • Phong Shading
  • Variance Soft Shadow Mapping
  • Convolution Shadow Mapping
  • Percentage Closer Shadow Mapping
  • Layered Variance Shadow Maps
  • Image Based Methods etc.

Not all of these methods produce soft shadows in real time. Ray tracing and radiosity render the most realistic images with shadows but demand very heavy computation, so the result is not generated in real time, and we do not examine those techniques further. With the advances in recent GPU systems, real-time soft shadow rendering has become more common and new techniques continue to be introduced.

Rendering soft shadows cast by an extended light source in real time is an ongoing research topic; no single definitive method has yet emerged, so research continues. We aim to develop our own real-time shadow rendering method or to modify existing ones.

Some Basic Tools for shadow rendering

Shadow rendering is a delicate task: a wide range of factors play a part in drawing shadows, and an understanding of these factors is a prerequisite for drawing them.

Light sources play a vital role in rendering shadows. In computer graphics (CG), a point light source is used to draw hard shadows, yet point light sources do not exist in real life, and hard shadows give a less realistic impression of the scene.

Next comes determining shadows with multiple light sources. This is straightforward once the shadow cast by a single light source can be computed, since the individual contributions can then be combined.

Umbra and Penumbra Region

In rendering shadows there are several complicating factors; one of them is determining the umbra and penumbra regions.

The umbra is the region in which the light source is totally occluded by the object, so a hard, fully dark shadow is drawn onto that part of the surface. The penumbra is the region in which the light source is only partially occluded, so a softer, partially lit shadow is drawn there. Combining the two yields a complete scene with soft shadows.

The distances between the object, the light source and the surface also play a vital role. Whenever the distance between the occluder and the light source changes, the shadow changes its shape and size, which contributes to a realistic view. If the occluder is far from the light source, a smaller shadow region should be drawn; if it is close to the light source, the shadow region should be larger.
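This relationship is commonly summarised by the penumbra-width estimate used in PCSS-style methods; the following sketch is illustrative, with variable names of our own choosing.

```python
# Penumbra-width estimate commonly used in PCSS-style soft shadow methods:
# the penumbra grows with the light size and with the blocker-to-receiver
# gap, and shrinks as the blocker moves away from the light.
def penumbra_width(light_size, d_blocker, d_receiver):
    """d_blocker, d_receiver: distances from the light to blocker and receiver."""
    return light_size * (d_receiver - d_blocker) / d_blocker

# Example: a 0.5-unit light, blocker at 2 units, receiver at 10 units.
print(penumbra_width(0.5, d_blocker=2.0, d_receiver=10.0))  # 2.0
```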

We are developing an application that renders soft shadows in real time. We will not consider techniques that demand long computation times, and we accept some compromise in shadow quality.

High-end GPU systems are now available that compute shadows quickly, so deciding whether a method is truly real-time requires measurement. We look at the FPS (frames per second) to get an accurate idea of rendering time.

Survey of Shadow Algorithms

The first complete survey of shadow algorithms was carried out by Woo et al. [90]. According to them, the cost of rendering shadows depends on three basic factors: storage usage, pre-processing complexity and the complexity incurred while the actual rendering is done.

Hard shadows are easy to compute: the only thing to determine while rendering them is whether each point lies in the shadow region or not.

Several techniques for rendering soft shadows are discussed, including frame-buffer algorithms, distributed ray tracing, cone tracing, area subdivision and radiosity, although not all of them render shadows in real time.

GPU systems have developed considerably and the focus on real-time rendering has increased. Image quality has also improved as antialiasing, motion blur and shadow casting have become commonplace.

Several things must be kept in mind when rendering shadows, such as the position and size of the occluder, the geometry of the occluder and the geometry of the receiver.

Recent growth in computer graphics technology has made it possible to render 3D graphics in real time, so earlier rendering techniques need to be updated for better results.

Hasenfratz et al. [03] carried out a survey of real-time shadow algorithms; we study these algorithms here.

In general, shadows are of two types: hard shadows and soft shadows. Hard shadows are comparatively easy to compute; the only question is whether a point is in shadow or not. They are generated by a point light source, which does not exist in real life. Several algorithms exist for rendering hard shadows.

Soft shadows are more complicated to implement. They give a more realistic view than hard shadows but at a higher computational cost.

The softness of a shadow depends on the distances between the source, the occluder and the receiver. Computing soft shadows is not easy and calls for complex techniques, and several issues must be dealt with. In practice a scene rarely has a single light source, so the shadows produced by several light sources must be calculated; this is handled by obtaining the shadow from each light source separately and then adding the contributions to obtain the final shadow.

There may also be more than one occluder. The shadow area of each occluder is combined to obtain the final shadow region; however, the relation between the partial visibility functions of different occluders is hard to compute exactly, so approximate values are used.

Computing shadows for an extended light source is complex. Real-time shadow algorithms typically compute visibility information for one point on the source and then derive the behaviour for the extended light source from it. Using the complete extended light source for visibility computation is algorithmically too expensive for real time; dividing the light source into smaller light sources reduces the complexity to some extent.

Penumbra-region generation produces soft shadows from hard ones and can be done in real time. Penumbra regions are computed in two ways:

1. Extend the umbra region outwards, computing the outer penumbra.

2. Shrink the umbra region inwards, computing the inner penumbra.

We focus on rendering soft shadows in real time, so we do not consider techniques such as ray tracing, radiosity, Monte Carlo ray tracing or photon mapping, which are complex to implement and do not render shadows in real time.

Shadows from a point light source, shadow mapping and the shadow volume algorithm will be our main focus.

To compute shadows, one step is essential: identifying the parts of the scene that are hidden from the light source. This is the same problem as visible surface determination.

This is where the shadow map, based on the Z-buffer, comes in.

First, the scene is rendered from the point of view of the light source, and the resulting depth (Z) values are stored in the Z-buffer. Each Z value records the distance from the light of the nearest surface visible through that pixel; a point is then in shadow when its depth from the light exceeds the stored value.
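A minimal sketch of this shadow-map test, assuming the depth buffer rendered from the light's point of view is already available; the bias value and names are illustrative.

```python
import numpy as np

# Classic shadow-map test: a point is in shadow when its depth from the light
# exceeds the stored value at the corresponding texel (plus a small bias
# against self-shadowing artefacts, i.e. "shadow acne").
def in_shadow(shadow_map, light_uv, light_depth, bias=1e-3):
    h, w = shadow_map.shape
    u = int(np.clip(light_uv[0] * (w - 1), 0, w - 1))
    v = int(np.clip(light_uv[1] * (h - 1), 0, h - 1))
    return light_depth > shadow_map[v, u] + bias

# Example: a toy 4x4 depth buffer; the queried point lies behind the nearest
# occluder stored at that texel, so it is reported as shadowed.
shadow_map = np.full((4, 4), 0.4)
print(in_shadow(shadow_map, light_uv=(0.5, 0.5), light_depth=0.7))  # True
```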

Some Techniques to draw soft Shadows


Some techniques to render soft shadows in real time:

Image based approaches

  • Combine several shadow textures taken from point samples on the extended light source (see the sketch after this list)
  • Use a layered attenuation map instead of depth images
  • Use a small number of shadow maps
  • Convolve a shadow map with an image of the light source
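As a sketch of the first image-based approach, assuming a hard-shadow test has already been evaluated for each point sample on the area light, the per-point visibility is obtained by simple averaging; the names below are our own.

```python
import numpy as np

# Illustrative sketch: take N point samples on the area light, run an ordinary
# hard-shadow test per sample, and average the binary results into a
# fractional visibility per receiver point.
def soft_visibility(hard_shadow_tests):
    """hard_shadow_tests: boolean array, True where the light sample is occluded."""
    occluded = np.asarray(hard_shadow_tests, dtype=float)
    return 1.0 - occluded.mean()   # 1.0 fully lit, 0.0 fully in umbra

# Example: 16 light samples, 5 of them occluded -> the point is in penumbra.
print(soft_visibility([True] * 5 + [False] * 11))  # 0.6875
```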

Object based approaches

  • Combine several shadow volumes taken from point samples on the light source
  • Broaden the shadow volume
  • Build a penumbra volume for each edge of the shadow silhouette

Whichever approach is used, it must be kept in mind that the rendering has to be done in real time.

Several factors influence real-time shadow rendering: the geometry of the occluder, whether the shadow must be physically exact (methods that compute only the inner or outer penumbra must use approximate values), the light source, the number of subdivisions of the light source, and the cost of drawing the silhouettes of objects.

Image-based techniques generate high-quality soft shadows and also work with area light sources. Their complexity depends on the image size and the light source rather than on scene complexity; because they do not rely on scene geometry, results are obtained faster.

Two image-based techniques from this line of work are discussed below.

Soft shadows from an area light source bring realism to computer-generated images. Determining the penumbra remains a problem to be dealt with: it requires computing the visibility between every surface point and every point on the light source.

Williams [78] computed hard shadows from a point source by performing the visibility calculation in image space. The method does not give good results because of undersampling and aliasing artifacts, and again a point light source does not exist in real life.

Agrawala et al. [2000] generated soft shadows by merging two techniques, both of which compute shadows in image space. The time and memory required depend on the image size and the number of lights, so the dependence on geometric complexity is removed.

Layered Attenuation Map

This approach achieves interactive rendering rates, but its sampling flexibility is limited. A layered depth image (LDI) is computed first; it keeps track of the depth information together with layered attenuation maps, which are used to generate projective soft shadow textures. During normal rendering, the appropriate attenuation is applied to produce the shadows.

Coherence Based Raytracing

This approach is well suited to generating high-quality images, although it is not interactive. Shadow maps are first computed from a few points on the light source, often its boundary vertices. The visible portion of the light source varies little between surface points that are close to one another, and this coherence is exploited.

Object-based methods such as distributed ray tracing, radiosity, discontinuity meshing or back projection also generate soft shadows. These techniques produce good shadows but are not fast.

Soler and Sillion [98] generated approximate soft shadows by convolving blocker images. This technique avoids sampling artifacts but requires clustering the geometry in object space; clusters cannot shadow themselves, and scenes such as plants or trees require a large number of clusters, which increases the complexity.

The layered attenuation map method gives good results for rapid previewing: it has a fast pre-computation phase and an interactive display phase, independent of scene geometric complexity. Final images for applications such as pre-rendered animation, however, demand anti-aliased, artifact-free shadows, and for this the coherence-based ray-tracing algorithm is introduced.

Soft shadow mapping has become a very attractive option for interactive applications and games, but to run in real time it must be very efficient. Image-based shadow mapping was originally used for hard shadows; many methods have since been introduced to map soft shadows, among them VSM (Variance Soft Shadow Mapping) and PCSS (Percentage Closer Soft Shadows). When the light source is fairly small these methods produce good results, but as the light source grows they no longer perform correctly. Yang et al. [2010] tried to improve on PCSS to achieve better soft shadow mapping performance. Variance soft shadow mapping is widely used: it supports pre-filtering and needs relatively little memory to store its textures, but it does not always map soft shadows accurately. It can produce errors when computing average blocker depth values, in which case a brute-force computation is applied as a fallback. Some pixels may also be incorrectly lit, so the soft shadow is not proper; this is called the non-planarity problem.

Yang et al. [2010] aimed to derive a formula for estimating the average blocker depth based on VSM theory, and to develop a practical filter-kernel subdivision scheme to handle the non-planarity problem. There are two types of subdivision, uniform and adaptive, and either can be chosen for implementation. Existing shadow algorithms are reused for the actual mapping: classical discontinuity meshing was used to generate shadows in earlier work, whereas recent work prefers shadow mapping techniques. The back-projection method is too complex to implement in real time and also yields incorrect occlusion or light leaking. Hard shadow mapping has problems of its own, among them pre-filtering and edge anti-aliasing: pre-filtering cannot be executed together with a proper shadow test, although some pre-filtering methods have been adapted to run shadow tests. VSM supports pre-filtering very well. Another method, ESM (Exponential Shadow Maps), uses exponential functions to run the shadow test; it depends on PCF (Percentage Closer Filtering) to handle non-planarity, which works well with soft shadows, but PCF is a very expensive process when mapping soft shadows. Soft shadow mapping with pre-filtering can therefore still show non-planarity problems.

Variance soft shadow mapping is based on Chebyshev's inequality, so it yields a close approximation rather than an always-accurate result. The most discussed problems in PCSS and VSM are how to calculate the average blocker depth efficiently and how to handle non-planarity. Yang et al. [2010] proposed solutions to both: they divide the main filter kernel into sub-kernels of equal size and choose PCF sampling over an assumption-based lit-kernel test, since PCF removes the non-planarity problem to some extent when the sampling size is small. Two types of subdivision are used in their shadow mapping, uniform and adaptive; if the number of sub-kernels is large (>= 64), adaptive subdivision gives better results. Yang et al. [2010] propose merging both subdivision schemes and achieve a better result. They use SAT (Summed Area Tables) for pre-filtering; because SAT suffers numerical precision loss for large filter kernels, 32-bit integers are used. The average blocker depth z_avg and the receiver depth d are calculated, their difference is stored, and the outcome is decided by comparison against a given threshold.
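The Chebyshev (Cantelli) bound that VSM-style methods rely on can be written as a short function; this is the textbook formulation rather than Yang et al.'s code, and the example numbers are arbitrary.

```python
# One-sided Chebyshev (Cantelli) bound used by variance shadow mapping.
# The filtered shadow map stores the first two depth moments; the bound gives
# the maximum fraction of the filter region whose stored depth is at least the
# receiver depth d, i.e. an upper bound on the lit fraction.
def vsm_visibility(m1, m2, d, min_variance=1e-6):
    """m1, m2: filtered E[z] and E[z^2]; d: receiver depth from the light."""
    if d <= m1:
        return 1.0                          # receiver in front of the mean occluder depth
    variance = max(m2 - m1 * m1, min_variance)
    return variance / (variance + (d - m1) ** 2)

# Example: mean occluder depth 0.4 with small variance, receiver at 0.7.
print(vsm_visibility(m1=0.4, m2=0.17, d=0.7))  # ~0.1, mostly shadowed
```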

Yang et al. [2010] applied the existing soft shadow mapping technologies and showed the differences between them: ray tracing, VSSM, PCSS and back-projection were each used to map soft shadows, and the different outputs were presented.

Shadow computation is a recognised problem in graphics, and it grows harder with an extended light source. The convolution technique was therefore introduced; it computes shadows quickly and with good accuracy.

Soft shadows arise from the variation of illumination across a surface: where the light source is partially occluded by other objects, the visible fraction of the source determines the penumbra.

The calculations required to render soft shadows are complicated to implement, and visibility determination must also be handled. Many methods have been introduced to render hard shadows from a point light source, but computing shadows from an area light source remains a problem in computer graphics.

Soler and Sillion [1998] rendered artifact-free soft shadows efficiently.

They compute a shadow map by creating textures from images of the light source and the occluders; these textures are then used to render the soft shadows, and the convolution is run in offscreen buffers. The result is exact for parallel configurations of source, blocker and receiver; otherwise an approximation is used. The overall approximation is handled by clustering the geometry, and the shadow maps from the sub-clusters are added together to obtain the result.

Because only a single rendered image is used, shadows are produced quickly and interactive rates can be achieved.
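In the spirit of this convolution approach (a sketch, not the authors' implementation), a soft occlusion texture can be obtained by convolving a binary blocker image with a normalised image of the light source; the blocker and light shapes below are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch: convolve a binary blocker image with a (normalised) image of the
# light source to obtain a fractional shadow texture for the receiver.
def soft_shadow_texture(blocker_mask, light_image):
    kernel = light_image / light_image.sum()              # normalise the source image
    occlusion = fftconvolve(blocker_mask.astype(float), kernel, mode="same")
    return np.clip(1.0 - occlusion, 0.0, 1.0)             # 1.0 lit, 0.0 fully occluded

# Example: a square blocker and a disc-shaped light source kernel.
mask = np.zeros((256, 256)); mask[96:160, 96:160] = 1.0
yy, xx = np.mgrid[-15:16, -15:16]
light = (xx**2 + yy**2 <= 15**2).astype(float)
texture = soft_shadow_texture(mask, light)
```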

Not all shadow algorithms work in real time. Ray tracing casts rays from a point on the surface towards the light source; it creates realistic images with shadows, but it is an expensive process.

Depth images can store information about the scene with respect to the light source, with shadows added later on the basis of that information. This process cannot handle an extended light source, so the penumbra region may be erroneous, and aliasing is not handled properly.

Soler and Sillion also discuss using a different data structure to hold the information required to render shadows. Shadow volumes can be used, built from a point light source and the occluders, but a point light source is not very useful when it comes to rendering soft shadows.

To keep track of the visibility information arising from an extended light source, a discontinuity mesh can be used. Both structures are computationally demanding.

Shadow generation is itself a complicated process, and extra care is needed to make it interactive. Up to the survey of Woo et al. [90], no single definitive method for shadow rendering had been put forward.

Shadow construction can be performed in the background while the scene is rendered; this requires hardware acceleration, which recent GPU systems provide.

Interactive pre-calculation of soft shadows was first done by Heckbert and Herf [98]. They created a number of shadow images for sample points on the light source and combined them afterwards. The process demands high-end graphics hardware, and with too few samples the result degrades into overlapping hard shadows rather than a smooth soft shadow.

We focus on real-time soft shadow rendering, and our main concern was to avoid techniques that need heavy computation, large amounts of memory or high-end GPU systems. We have implemented a very simple method to draw soft shadows, using basic geometry to describe the scene and the drawn objects.

We draw a triangular object and a receiving surface, and then calculate the surface normal using basic geometry.

Two lines are said to be normal when they are perpendicular to each other. For a curved surface, a tangent plane is constructed at a point, and the line or vector perpendicular to that tangent plane is the surface normal at that point.

For a plane given by the equation ax + by + cz + d = 0, the vector (a, b, c) is a normal. For a plane given parametrically by r(α, β) = a + αb + βc, a normal is given by the cross product b × c.

This gives us the surface plane on which the shadow will be drawn.

We then use matrix multiplication to obtain the actual points for drawing the shadow onto the surface. Matrix multiplication combines two matrices whose inner dimensions agree: the number of columns of the first must equal the number of rows of the second, and the resulting matrix has as many rows as the first and as many columns as the second.

We also use the equation of a straight line, y = mx + c. First we compute the line joining the light source to each vertex of the object; then, using y = mx + c, we derive the projected points of the shadow on the surface.
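A minimal sketch of this kind of planar shadow projection, written for a general plane; it illustrates the construction rather than reproducing our exact code, and the light position, plane and triangle in the example are arbitrary.

```python
import numpy as np

# Planar shadow projection sketch: each object vertex is projected along the
# line joining it to the point light until that line meets the receiving
# plane n.x + d = 0.
def project_to_plane(vertex, light, plane_n, plane_d):
    vertex, light, plane_n = map(np.asarray, (vertex, light, plane_n))
    direction = vertex - light                        # line from the light through the vertex
    t = -(plane_n @ light + plane_d) / (plane_n @ direction)
    return light + t * direction                      # intersection = projected shadow point

# Example: light above a triangle, shadow cast onto the ground plane y = 0.
light = np.array([0.0, 10.0, 0.0])
triangle = [np.array([1.0, 2.0, 0.0]), np.array([2.0, 2.0, 1.0]), np.array([1.5, 3.0, -1.0])]
shadow = [project_to_plane(v, light, plane_n=[0.0, 1.0, 0.0], plane_d=0.0) for v in triangle]
```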

Figure 3(a) shows a hard shadow drawn using our method. To draw the soft shadow, we first draw the hard shadow and then soften it.

Advantages
From the very beginning it has been our stated goal to produce soft shadows in real time. Our algorithm is very basic and is free of the heavy calculations required by other techniques.
Modern GPU systems come with a great deal of functionality and support the basic operations required by the various shadow rendering techniques. Programmable GPUs have also emerged and support single-sample soft shadows, penumbra maps, smoothies and soft shadow volumes.
We do not need to calculate the silhouette of the object. Silhouette extraction demands substantial pre-computation and is performed on the main processor rather than the GPU, so its pre-computation is quite lengthy.
We also do not need much memory: we do not keep track of the Z values, and additional buffers such as the stencil buffer are not required.

Results
We tabulate the FPS values for hard shadows and soft shadows.

We were able to render our soft shadows on a lower-end GPU system, with better results than expected. We have discussed some existing real-time soft shadow algorithms and designed a very simple but efficient soft shadow algorithm based on geometry.