Home
Advances in technology have improved upon the traditional approaches used across the medical sciences, and histopathology is yet another field that has benefited, and can benefit further, from them. Traditionally, after slide preparation, pathologists had to examine each slide individually in order to diagnose. But now, using volume-rendering techniques, one can generate a three-dimensional model of the tissue from the slides, enabling better histopathologic analysis.
We aim to create an application that provides the user with the following functionality:
- Send a set of virtual slides (.svs, .tif, etc.)
- Generate a 3D volume using the provided set of slides
- Given that the slides can be very large, come up with a suitable architecture to manage the data
- Provide a set of UI-based features:
  - Add a slicing feature using an arbitrary plane or axis-aligned planes
  - View the cross-section
  - Select a sub-volume
  - View certain artifacts based on their color (refer to this for further information)
  - Draw an ROI
  - Zoom/scale the volume, replacing the currently displayed image with an appropriately higher-resolution section
  - Annotations in 3D → provide an option to annotate artifacts in 3D
  - Some WSIs provide polylines → generate a marked region in the 3D rendering from a set of points
- Alignment of different slides
  - When we stack multiple slides, it is important to understand which section of each slide corresponds to which section of another.
  - Furthermore, two slides created from the same tissue may not be aligned corner to corner, so it is important to correct such misalignments.
- Refer to the Inviwo documentation.
- Ensure the Ubuntu version supports Qt.
- Use the CMake GUI to select the necessary modules and add `CMAKE_PREFIX_PATH`.
- For any `module not found` error, run `apt-file search <module name>` and install the suggested packages.
  Example: for `Qt5config.make not found`:
  ```shell
  $ apt-file search Qt5config
  $ sudo apt-get install qtbase5-dev qtdeclarative5-dev
  ```
- If certain issues still persist, refer to the Slack channel of Inviwo.
- The Inviwo documentation provides simple approaches for this.
- If the new module requires a dependency on another module, add it to `depends.cmake`.
- Ensure that there are no circular dependencies: if module A depends on module B, module B cannot depend on module A. Inviwo expects the dependency graph to be a DAG.
Add an image of the network, and upload the .inv file along with it.
Currently a user gets the following features:
- A user can provide slides in a supported format, which are used to generate a 3D volume. Currently, for the purpose of the demo, we are replicating a single slide and stacking the copies.

3D volume rendering and axis-aligned volume cropping
- The user can choose between Voxel-value classification, as seen above, and a Transfer Function classification that has been modified from the original processor. The latter allows the user to select colors from the original slide and render only the artifacts with those colors.
- A Threshold property lets the user select a radius around the mouse click from which to capture unique colors.

Selecting the required colors and removing the rest to render only the cells
- A user can perform arbitrary-plane slicing of the volume and view the cross-section.

Defining the plane by its point-normal equation: a point on the plane and the normal to the plane
- For understanding any of the classes, the Inviwo documentation provides an inheritance chart, which makes them easier to understand.
- Inviwo uses an indirect approach to setting up shader variables, as explained for the VolumeRaycaster in the section below.
**Understanding the existing Volume Raycaster and the supporting fragment shader**
The existing processor uses a transfer function to determine how the output looks, but we had to come up with a different approach.
The 4×256 grid shown at the bottom of the image represents the isovalues, and the isovalues represent intensity. Say we determine an *x–y* range of isovalues representing an artifact, say a cell. The transfer function doesn't provide us with any information about the colour or the position, which is its major drawback: there may be another entity with the same intensity, so we cannot simply assign them the same colour.
**A probable solution:**
For any color we have three channels (excluding the alpha value). Split the single transfer function into three transfer functions, each taking its values from one channel (R, G, or B) of the original color, and assign each node the original color. This way we are able to specify the opacity for specific colors.
---
**How is the transfer function applied?**
```cpp
void MultichannelRaycaster::initializeResources() {
    utilgl::addShaderDefines(shader_, raycasting_);
    utilgl::addShaderDefines(shader_, camera_);
    utilgl::addShaderDefines(shader_, lighting_);
    utilgl::addShaderDefines(shader_, positionIndicator_);
    utilgl::addShaderDefinesBGPort(shader_, backgroundPort_);
    if (volumePort_.hasData()) {
        size_t channels = volumePort_.getData()->getDataFormat()->getComponents();
        auto tfs = transferFunctions_.getPropertiesByType<TransferFunctionProperty>();
        for (size_t i = 0; i < tfs.size(); i++) {
            tfs[i]->setVisible(i < channels ? true : false);
        }
        std::stringstream ss;
        ss << channels;
        shader_.getFragmentShaderObject()->addShaderDefine("NUMBER_OF_CHANNELS", ss.str());
        std::stringstream ss2;
        for (size_t i = 0; i < channels; ++i) {
            ss2 << "color[" << i << "] = APPLY_CHANNEL_CLASSIFICATION(transferFunction" << i + 1
                << ", voxel, " << i << ");";
        }
        shader_.getFragmentShaderObject()->addShaderDefine("SAMPLE_CHANNELS", ss2.str());
        shader_.build();
    }
}
```
Look at the loop near the end of the code. There a string `ss2` is generated that assigns each color via the GLSL key `APPLY_CHANNEL_CLASSIFICATION(transferFunction, voxel, channel)`. This key is replaced by the following code, as defined in `shaderutils.cpp`.
```cpp
// classification (default (red channel) or specific channel)
std::string_view value;
std::string_view valueMulti;
switch (property.classification_.get()) {
    case RaycastingProperty::Classification::None:
        value = "vec4(voxel.r)";
        valueMulti = "vec4(voxel[channel])";
        break;
    case RaycastingProperty::Classification::TF:
        value = "applyTF(transferFunc, voxel.r)";
        valueMulti = "applyTF(transferFunc, voxel, channel)";
        break;
    case RaycastingProperty::Classification::Voxel:
    default:
        value = "voxel";
        valueMulti = "voxel";
        break;
}
const std::string_view key = "APPLY_CLASSIFICATION(transferFunc, voxel)";
const std::string_view keyMulti =
    "APPLY_CHANNEL_CLASSIFICATION(transferFunc, voxel, channel)";
shader.getFragmentShaderObject()->addShaderDefine(key, value);
shader.getFragmentShaderObject()->addShaderDefine(keyMulti, valueMulti);
```
Notice that `addShaderDefine()` sets the value of the key, as generated above, depending on the inputs. Similarly, a `bgKey` is defined and replaced by its value later.
```cpp
void addShaderDefinesBGPort(Shader& shader, const ImageInport& port) {
    std::string_view bgKey = "DRAW_BACKGROUND(result,t,tIncr,color,bgTDepth,tDepth)";
    if (port.isConnected()) {
        shader.getFragmentShaderObject()->addShaderDefine("BACKGROUND_AVAILABLE");
        shader.getFragmentShaderObject()->addShaderDefine(
            bgKey, "drawBackground(result,t,tIncr,color,bgTDepth,tDepth)");
    } else {
        shader.getFragmentShaderObject()->removeShaderDefine("BACKGROUND_AVAILABLE");
        shader.getFragmentShaderObject()->addShaderDefine(bgKey, "result");
    }
}
```
Let's look at the `applyTF` function definitions.
```glsl
#ifndef IVW_CLASSIFICATION_GLSL
#define IVW_CLASSIFICATION_GLSL

vec4 applyTF(sampler2D transferFunction, vec4 voxel) {
    return texture(transferFunction, vec2(voxel.r, 0.5));
}

vec4 applyTF(sampler2D transferFunction, vec4 voxel, int channel) {
    return texture(transferFunction, vec2(voxel[channel], 0.5));
}

vec4 applyTF(sampler2D transferFunction, float intensity) {
    return texture(transferFunction, vec2(intensity, 0.5));
}

#endif  // IVW_CLASSIFICATION_GLSL
```
Here we get the value for each voxel based on the transfer function. Now that we have the calculated values, let's look at what `shader_.getFragmentShaderObject()->addShaderDefine("SAMPLE_CHANNELS", ss2.str());` does.
```glsl
while (t < tEnd) {
    samplePos = entryPoint + t * rayDirection;
    voxel = getNormalizedVoxel(volume, volumeParameters, samplePos);
    // macro defined in MultichannelRaycaster::initializeResources();
    // sets the colors
    SAMPLE_CHANNELS;
    result = DRAW_BACKGROUND(result, t, tIncr, backgroundColor, bgTDepth, tDepth);
    result = DRAW_PLANES(result, samplePos, rayDirection, tIncr, positionindicator, t, tDepth);
    if (color[0].a > 0 || color[1].a > 0 || color[2].a > 0 || color[3].a > 0) {
        // world-space position
        vec3 worldSpacePosition = (volumeParameters.textureToWorld * vec4(samplePos, 1.0)).xyz;
        gradients = COMPUTE_ALL_GRADIENTS(voxel, volume, volumeParameters, samplePos);
        for (int i = 0; i < NUMBER_OF_CHANNELS; ++i) {
            color[i].rgb =
                APPLY_LIGHTING(lighting, color[i].rgb, color[i].rgb, vec3(1.0),
                               worldSpacePosition, normalize(-gradients[i]), toCameraDir);
            result = APPLY_COMPOSITING(result, color[i], samplePos, voxel, gradients[i], camera,
                                       raycaster.isoValue, t, tDepth, tIncr);
        }
    }
    // early ray termination
    if (result.a > ERT_THRESHOLD) {
        t = tEnd;
    } else {
        t += tIncr;
    }
}
```
Notice the `SAMPLE_CHANNELS`. This is the macro whose value is set to the `ss2` string in *multichannelraycaster.cpp*. So now our code has the `color` array, whose values are set by the transfer functions.
**How is the Transfer Function set up in the code for a VolumeRaycaster?**
Its definition is in `volumeraycaster.cpp`. We don't define a transfer function directly, but use an `IsoTFProperty`, which contains a transfer function as one of its attributes; in code, its object is `isotfComposite_`. In the `initializeResources()` function, `utilgl::addShaderDefines()` is called, which (as defined around *line 489* of `shaderutils.cpp`) sets the property as follows:
```cpp
void addShaderDefines(Shader& shader, const IsoTFProperty& property) {
    addShaderDefines(shader, property.isovalues_);
}

void addShaderDefines(Shader& shader, const IsoValueProperty& property) {
    const auto isovalueCount = property.get().size();
    // need to ensure there is always at least one isovalue due to the use of the macro
    // as array size in IsovalueParameters
    shader.getFragmentShaderObject()->addShaderDefine(
        "MAX_ISOVALUE_COUNT", StrBuffer{"{}", std::max<size_t>(1, isovalueCount)});
    shader.getFragmentShaderObject()->setShaderDefine("ISOSURFACE_ENABLED",
                                                      !property.get().empty());
}
```
So it just sets `MAX_ISOVALUE_COUNT` and `ISOSURFACE_ENABLED`; as of now, the transfer-function attribute of `isotfComposite_` hasn't been used yet.
Then, in `raycast()`, `utilgl::bindAndSetUniforms(shader_, units, isotfComposite_)` is called, which is where the transfer function comes into play. Its definition is in `textureutils.cpp`:
```cpp
void bindTexture(const IsoTFProperty& property, const TextureUnit& texUnit) {
    if (auto tfLayer = property.tf_.get().getData()) {
        auto transferFunctionGL = tfLayer->getRepresentation<LayerGL>();
        transferFunctionGL->bindTexture(texUnit.getEnum());
    }
}

void bindAndSetUniforms(Shader& shader, TextureUnitContainer& cont, const IsoTFProperty& property) {
    TextureUnit unit;
    bindTexture(property, unit);
    shader.setUniform(property.tf_.getIdentifier(), unit);
    cont.push_back(std::move(unit));
}
```
**Understanding the CustomVolumeRaycaster**
Since the transfer-function method presented several issues, we decided to come up with a different, simpler approach. The goal was now to render only those colors specified by the user. This doesn't provide control over opacity yet, but that can be a future improvement.
First we wrote a custom fragment shader to be used for this purpose. The important function here is `getColorVal`:
```glsl
vec4 getColorVal(vec4 colorArray[MAX_COLORS], vec4 voxel) {
    if (colorLen == 0) return voxel;
    for (int i = 0; i < colorLen; i++) {
        if (voxel.r <= colorArray[i].r + 0.01 && voxel.r >= colorArray[i].r - 0.01) {
            if (voxel.g <= colorArray[i].g + 0.01 && voxel.g >= colorArray[i].g - 0.01) {
                if (voxel.b <= colorArray[i].b + 0.01 && voxel.b >= colorArray[i].b - 0.01) {
                    if (colorArray[i].a != 0.0) {
                        return voxel;
                    } else {
                        return vec4(0.0, 0.0, 0.0, 0.0);
                    }
                }
            }
        }
    }
    return vec4(0.0, 0.0, 0.0, 0.0);
}
```
One thing to take care of: we must either clear the memory allocated earlier in the fragment shader for storing the selected color values, or ensure that we don't iterate over memory that is not needed.
Next we defined a new function in `shaderutils.cpp` to accommodate the changes, with reasoning similar to that explained earlier:
```cpp
void addShaderDefines(Shader& shader, const CustomRaycastingProperty& property) {
    {  // rendering type
        switch (property.renderingType_.get()) {
            case CustomRaycastingProperty::RenderingType::Dvr:
            default:
                shader.getFragmentShaderObject()->addShaderDefine("INCLUDE_DVR");
                shader.getFragmentShaderObject()->removeShaderDefine("INCLUDE_ISOSURFACES");
                break;
        }
    }
    {  // classification (default (red channel) or specific channel)
        std::string_view value;
        std::string_view valueMulti;
        switch (property.classification_.get()) {
            case CustomRaycastingProperty::Classification::None:
                value = "vec4(voxel.r)";
                valueMulti = "vec4(voxel[channel])";
                break;
            case CustomRaycastingProperty::Classification::TF:
                value = "getColorVal(pointValue, voxel)";
                valueMulti = "getColorVal(pointValue, voxel, channel)";
                break;
            case CustomRaycastingProperty::Classification::Voxel:
            default:
                value = "voxel";
                valueMulti = "voxel";
                break;
        }
        const std::string_view key = "APPLY_CLASSIFICATION(pointValue, voxel)";
        const std::string_view keyMulti =
            "APPLY_CHANNEL_CLASSIFICATION(pointValue, voxel, channel)";
        shader.getFragmentShaderObject()->addShaderDefine(key, value);
        shader.getFragmentShaderObject()->addShaderDefine(keyMulti, valueMulti);
    }
}
```
Keep in mind to add any new shaders to the `CMakeLists.txt`.
**Understanding Custom Pixel Value Processor**
The `getRegionColors` function is the main function for this processor. Given any coordinates, it appends the unique colors to a vector:
```cpp
void CustomPixelValue::getRegionColors(size2_t pos) {
    auto img = inport_.getData();
    auto dims = img->getDimensions();
    auto numCh = img->getNumberOfColorLayers();
    for (size_t i = 0; i < numCh; i++) {
        img->getColorLayer(i)
            ->getRepresentation<LayerRAM>()
            ->dispatch<void, dispatching::filter::All>([&](const auto layer) {
                using ValueType = util::PrecisionValueType<decltype(layer)>;
                using Comp = typename util::value_type<ValueType>::type;
                const auto data = layer->getDataTyped();
                const auto im = util::IndexMapper2D(dims);
                auto value = data[im(pos)];
                auto v = util::glm_convert<glm::vec<4, Comp>>(value);
                v = util::applySwizzleMask(v, img->getColorLayer(i)->getSwizzleMask());
                auto vf = util::glm_convert<glm::vec<4, float>>(v);
                if constexpr (std::is_integral_v<Comp>) {
                    vf /= std::numeric_limits<Comp>::max();
                }
                std::vector<inviwo::vec4> selectedColorVal;
                for (auto p : selectedPixelsData_.getProperties()) {
                    auto t = static_cast<FloatVec4Property*>(p);
                    selectedColorVal.push_back(t->get());
                }
                if (std::find(selectedColorVal.begin(), selectedColorVal.end(), vf) ==
                    selectedColorVal.end()) {
                    std::string selectedPixelsIdentifier = "Color" + std::to_string(MaxColorVal_ + 1);
                    std::string selectedPixelsName = "Color " + std::to_string(MaxColorVal_ + 1);
                    selectedPixelsData_.addProperty(new FloatVec4Property(
                        selectedPixelsIdentifier, selectedPixelsName, vf, vec4(0), vec4(1),
                        vec4(std::numeric_limits<float>::epsilon()), InvalidationLevel::Valid,
                        PropertySemantics::Color));
                    MaxColorVal_++;
                }
                areaPixelsData_ = selectedColorVal;
            });
    }
    return;
}
```
Depending on the GPU there is a memory limit, hence we cannot store a large number of colors. Currently the maximum array size defined is 100.
**Shortcomings and Possible Solutions**
- As mentioned above, there is a memory limitation on the number of color values that can be passed to the shader. Such a limitation affects the user experience.
- As of now, the most efficient solution is utilising the method explained here. This can be achieved by modifying the already existing `multichannelraycaster`. The only issue it brings is a lot of work for the user: setting the color of each node of the transfer function, in three separate transfer-function widgets. This could be solved, but would require creating a custom transfer-function setter.
The aim was to show a visual plane that crops the volume. The issue found was that the Volume Raycaster expects coordinates in [-1, 1], while the property value stored in the MeshClipping processor is in the original coordinates. But testing with normalised coordinates didn't seem to produce the visual plane either.
The plane appears at an offset from the cropping plane and isn't visible for all values
A better solution would be to explore the VolumeSliceGL processor. As can be seen in the diagnosticlayout_head.inv example, this processor shows the planes correctly. The task would only be to allow arbitrary plane equations instead of axis-aligned ones.
**Understanding Histopathology and Volume Rendering**
- One major problem to solve is determining which slices to pull from the OpenSlide API when a user is not looking orthogonally at the volume. Currently, we pass the coordinates of an ROI to the OpenSlide API and pull the higher-resolution slide from there. Consider the case where a user slices a volume and zooms in on the sliced region: we must figure out how to pull the higher-resolution image of that region.
- Annotations could be a great addition to the application. They could utilise processors such as `DrawLines`, `TextOverlay`, and `PixelValue` (if one wishes to highlight the region).