4.5.0 Change Notes

Display

Seafloor terrain

The iTwin viewer supports visualizing the Earth's landmasses in 3d using Cesium World Terrain, providing real-world context for infrastructure built on land. For undersea infrastructure, the context of the seafloor terrain can be just as important. Now, you can make use of Cesium World Bathymetry to get a glimpse of the world under the waves.

To enable seafloor terrain, create a TerrainSettings specifying CesiumTerrainAssetId.Bathymetry as the dataSource. For example:

function enableBathymetry(viewport: Viewport): void {
  viewport.changeBackgroundMapProps({
    terrainSettings: {
      dataSource: CesiumTerrainAssetId.Bathymetry,
    },
  });
}

You can alternatively specify the Id of any global Cesium ION asset to which you have access. Either way, make sure you add the asset to your ION account first.
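For instance, assuming you have added a terrain asset to your ION account, you can pass its Id in place of the CesiumTerrainAssetId constant (the Id below is a hypothetical placeholder):

function enableCustomTerrainAsset(viewport: Viewport): void {
  viewport.changeBackgroundMapProps({
    terrainSettings: {
      // Hypothetical asset Id - substitute the Id of a terrain asset added to your ION account.
      dataSource: "123456",
    },
  });
}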

The new TerrainSettings.dataSource property can be used by custom TerrainProviders as well, to select from different sources of terrain supplied by the same provider.

Below, the islands of Cape Verde are visualized with bathymetric terrain, slightly exaggerated in height for emphasis:

Cape Verde terrain

Simplified TileTreeReference

iTwin.js provides two ways to add graphical content to a Viewport's scene:

  • A Decorator, which supplies graphics that are recreated whenever the viewport's decorations are invalidated.
  • A TiledGraphicsProvider, which supplies TileTreeReferences from which tiles are generated, cached, and reused across frames.

In some cases, you'd just like to add some relatively simple graphics to the viewport's scene using a TiledGraphicsProvider. To simplify that process, TileTreeReference.createFromRenderGraphic accepts a graphic created using a GraphicBuilder and produces a tile tree from it, which you can then associate with a TiledGraphicsProvider. Here's an example:


/** Add a TiledGraphicsProvider to draw a sphere into the specified viewport. */
export function addTiledGraphics(viewport: Viewport): void {
  // Create a scene graphic with a 1cm chord tolerance.
  const builder = IModelApp.renderSystem.createGraphic({
    type: GraphicType.Scene,
    computeChordTolerance: () => 0.01,
  });

  // Produce the sphere graphic.
  builder.addSolidPrimitive(Sphere.createCenterRadius(new Point3d(0, 0, 0), 20));
  const graphic = builder.finish();

  // Create a TileTreeReference to draw the sphere.
  const treeRef = TileTreeReference.createFromRenderGraphic({
    graphic,
    modelId: viewport.iModel.transientIds.getNext(),
    iModel: viewport.iModel,
  });

  // Register a provider to draw the sphere as part of the viewport's scene.
  viewport.addTiledGraphicsProvider({
    forEachTileTreeRef: (_vp, func) => func(treeRef),
  });
}

You can use this new API to easily create a tile tree for dynamically classifying a reality model.

Dynamic classifiers

Classification involves using geometry from a design model to "classify" the geometry of a reality model. Wherever a region of the reality model intersects with geometry belonging to an element in the design model, that region is treated as if it belongs to the element. For example, mousing over a classified region displays a tooltip describing the classifying element, and selecting the region selects the element. The color of the classified region can also be altered to match the classifying element's color. A classic use case is a photogrammetry model of a city, classified using polygons representing the footprints of buildings within the city, such that the user can interact with individual buildings within the reality model.

The usefulness of this technique is limited by the constraint that the classifier geometry must originate from a persistent GeometricModel stored in an iModel. Imagine a scenario in which you have a reality model representing a building, and access to live data streamed from heat sensors placed in rooms within the building. You might wish to classify the rooms based on their current temperatures - but because the data is constantly changing, it wouldn't make sense to store it in a persistent model.

Now, that constraint has been lifted. You can define your geometry at run-time and apply it to a reality model as a DynamicSpatialClassifier by setting the activeClassifier property of ContextRealityModelState.classifiers. The geometry is supplied by a TileTreeReference, which can easily be created using the new TileTreeReference.createFromRenderGraphic API.

Here's a simple example that classifies spherical regions of a reality model:


/** A spatial region described by a bounding sphere, used to classify a reality model. */
interface ClassifiedRegion {
  /** The center point of the bounding sphere. */
  center: Point3d;
  /** The radius of the bounding sphere. */
  radius: number;
  /** The name of the region, to serve as a tooltip when the user hovers over the classified region of the reality model. */
  name: string;
  /** The color in which to draw the classified region of the reality model. */
  color: ColorDef;
}

/** Classify spherical regions of a reality model, either by planar projection (`classifyByVolume=false`) or by bounding volume (`classifyByVolume=true`). */
export function classifyRealityModel(model: ContextRealityModelState, regions: ClassifiedRegion[], classifyByVolume: boolean): void {
  const modelId = model.iModel.transientIds.getNext();

  // Create a GraphicBuilder to define the classifier geometry.
  const builder = IModelApp.renderSystem.createGraphic({
    type: GraphicType.Scene,
    computeChordTolerance: () => 0.01,
    pickable: {
      modelId,
      id: modelId,
      isVolumeClassifier: classifyByVolume,
    },
  });

  const regionIdsAndNames: Array<{ name: string, id: Id64String }> = [];

  for (const region of regions) {
    // Assign a unique Id to each region, so we can identify them when the user interacts with them in a viewport.
    const regionId = model.iModel.transientIds.getNext();
    regionIdsAndNames.push({ id: regionId, name: region.name });

    // Add a sphere representing the region.
    builder.setSymbology(region.color, region.color, 1);
    builder.activatePickableId(regionId);
    builder.addSolidPrimitive(Sphere.createCenterRadius(region.center, region.radius));
  }

  // Create a tile tree reference to provide the graphics at display time.
  const tileTreeReference = TileTreeReference.createFromRenderGraphic({
    graphic: builder.finish(),
    modelId,
    iModel: model.iModel,
    getToolTip: async (hit) => Promise.resolve(regionIdsAndNames.find((x) => x.id === hit.sourceId)?.name),
  });

  // Direct the reality model to use our tile tree reference for classification.
  model.classifiers.activeClassifier = {
    tileTreeReference,
    name: "Regions",
    flags: new SpatialClassifierFlags(undefined, undefined, classifyByVolume),
  };
}
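
A hypothetical invocation might look like the following, assuming you already hold a ContextRealityModelState (for example, one attached to the viewport's display style) and using placeholder regions, names, and colors:

/** Hypothetical example: classify two placeholder rooms within a reality model, by bounding volume. */
function classifyRooms(realityModel: ContextRealityModelState): void {
  classifyRealityModel(realityModel, [
    { center: new Point3d(0, 0, 0), radius: 10, name: "Lobby", color: ColorDef.blue },
    { center: new Point3d(25, 0, 0), radius: 8, name: "Server room", color: ColorDef.red },
  ], true);
}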

Electron 29 support

In addition to the already supported Electron versions, iTwin.js now supports Electron 29.

Editor

Changes to @beta BasicManipulationCommandIpc class:

Changes to @beta CreateElementWithDynamicsTool class:

Added EditTools.registerProjectLocationTools method. These tools are no longer automatically registered by EditTools.initialize. Applications that wish to include these tools and also register the required BasicManipulationCommand with EditCommandAdmin should call this new method.

Removed several @alpha test tools for creating Generic:PhysicalObject elements; these did not belong in the core package.

Lock Control

Changes to the @beta LockControl class: releaseAllLocks is now @internal. It should only be called internally after pushing or abandoning all changes.

Presentation

RPC interface version bump

The presentation backend now sends more properties-related information to the frontend than it did in previous versions, and the frontend relies on that information for formatting and rendering the properties. As a result, the version of PresentationRpcInterface has been bumped from 4.0.0 to 4.1.0.

Custom renderer and editor support for array items

Support for custom renderers and editors has been added for array items.

A renderer / editor may be assigned to items of a specific array property by creating a ContentModifier for a class that has the property and adding a property override for the array property with a [*] suffix. Example:

{
  "ruleType": "ContentModifier",
  "class": { "schemaName": "MySchemaName", "className": "MyClassName" },
  "propertyOverrides": [
    {
      "name": "MyArrayProperty[*]",
      "renderer": {
        "rendererName": "test-renderer"
      },
      "editor": {
        "editorName": "test-editor"
      }
    }
  ]
}

Support for property overrides on ECStruct member properties

Support for property overrides has been added for struct member properties.

The overrides may be assigned to members of a specific ECStruct class by creating a ContentModifier for the struct class and adding property overrides for the members. Example:

{
  "ruleType": "ContentModifier",
  "class": { "schemaName": "MySchemaName", "className": "MyStructClassName" },
  "propertyOverrides": [
    {
      "name": "StructMemberProperty",
      "renderer": {
        "rendererName": "test-renderer"
      }
    }
  ]
}

Deprecation of async array results in favor of async iterators

PresentationManager contains a number of methods to retrieve sets of results like nodes, content, etc. All of these methods have been deprecated in favor of new ones that return an async iterator instead of an array:

  • Use getContentIterator instead of getContent and getContentAndSize.
  • Use getDisplayLabelDefinitionsIterator instead of getDisplayLabelDefinitions.
  • Use getDistinctValuesIterator instead of getPagedDistinctValues.
  • Use getNodesIterator instead of getNodes and getNodesAndCount.

All of the above methods, including the deprecated ones, received the ability to load large sets of results concurrently. Previously, when requesting a large set (page size > 1000), the result was built by sending requests sequentially: first the first page, then the second, and so on, until the whole requested set was retrieved. Now, a single request is sent for the first page of results, which also determines the total number of items and the backend's page size limit, and the remaining page requests are then sent all at once. A maxParallelRequests attribute was added to these methods to control how many parallel requests should be sent at a time.
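
For example, a large request might cap the number of concurrent page requests like this (the request props and the value of 4 are illustrative):

const { items } = await manager.getDisplayLabelDefinitionsIterator({
  ...requestProps,
  // Illustrative value - allow up to 4 page requests to be in flight at a time.
  maxParallelRequests: 4,
});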

While the deprecated methods should perform on par with the newly added async iterator ones, the latter have two advantages:

  1. The caller can start iterating over results as soon as the first page is received, instead of having to wait for the whole set.
  2. The iterator version stops sending requests as soon as the caller stops iterating over the async iterator.

Example: Showing display labels for a large set of elements in a React component.

The deprecated approach would be to use PresentationManager.getDisplayLabelDefinitions to retrieve the labels of all elements. Because the API has to load all the data before returning it to the widget, the user has to wait a long time before seeing any results. Moreover, if the user closes the widget (unmounting the component), getDisplayLabelDefinitions keeps building the array by sending requests to the backend - there is no way to cancel that.

The new approach is to use PresentationManager.getDisplayLabelDefinitionsIterator to get the labels. The labels are streamed to the caller as soon as the first page of results is retrieved, so the user sees initial results quickly while additional pages keep loading in the background. In addition, if the component is unmounted, the iteration can be stopped, thus cancelling all further requests to the backend:

const { items } = await manager.getDisplayLabelDefinitionsIterator(requestProps);
for await (const label of items) {
  if (isComponentUnmounted) {
    break;
  }
  // update component's model to render the loaded label
}

Unified selection decoupling from @itwin/presentation-frontend

The ability to provide a custom selection storage was added to @itwin/presentation-frontend. Prior to this change, unified selection was tightly coupled with @itwin/presentation-frontend and the tree component, and only instance keys and tree node keys could be added to unified selection.

With the new version, @itwin/presentation-frontend can be initialized with a custom selection storage from the @itwin/unified-selection package. This provides an API for storing custom selectable objects that can later be used in other components or contribute to the hilited items in a viewport.

One side effect of these changes is that calling Presentation.selection.getSelection immediately after Presentation.selection.(add*|replace*|remove*|clear*) will not return the latest selection. This was never guaranteed; to get the latest selection after any change, use the Presentation.selection.selectionChange event.

import { createStorage } from "@itwin/unified-selection";
import { Presentation } from "@itwin/presentation-frontend";

const unifiedSelectionStorage = createStorage();

await Presentation.initialize({
  selection: {
    storage: unifiedSelectionStorage,
  },
});

// existing unified selection API can be used to add keys
Presentation.selection.addToSelection("SelectionSource", imodel, [{ id: "0x1", className: "Schema:Class" }]);

// unified selection storage can be used directly to add custom selectable objects
unifiedSelectionStorage.addToSelection({
  iModelKey: imodel.key,
  source: "SelectionSource",
  selectables: [{
    identifier: "customSelectable",
    loadInstanceKeys: () => loadKeysFromCustomSource(),
    // additional data can be stored to share between components using unified selection
    data: additionalData
  }],
});

Fixed bounding box types

The bbox properties of Placement2dProps and Placement3dProps were incorrectly typed as LowAndHighXY and LowAndHighXYZ, respectively. In actuality, they may also be of the type { low: number[]; high: number[]; }, and will always be in this form when returned from the backend by functions like getElementProps. The types have been adjusted to LowAndHighXYProps and LowAndHighXYZProps to reflect this.

An analogous adjustment was made to the bbox property of GeometryPartProps.
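
For illustration, here is a sketch of normalizing either form into a Range3d, assuming the props were returned by a function like getElementProps:

import { Placement3dProps } from "@itwin/core-common";
import { Range3d } from "@itwin/core-geometry";

/** Convert a placement's bbox, which may use object or array form for low/high, into a Range3d. */
function bboxToRange(props: Placement3dProps): Range3d | undefined {
  // Range3d.fromJSON accepts both { x, y, z } objects and [x, y, z] arrays for low and high.
  return props.bbox ? Range3d.fromJSON(props.bbox) : undefined;
}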

Last Updated: 03 April, 2024