Merge pull request #193 from aferust/master
update docs finally
aferust authored May 20, 2024
2 parents 99791a0 + 32d8136 commit b3168ee
Showing 18 changed files with 299 additions and 101 deletions.
4 changes: 2 additions & 2 deletions docs/builddoc.sh
@@ -5,7 +5,7 @@ set DDOCFILE=dcv.ddoc
cd ../

dub --build=docs --compiler=ldc2 dcv:core
dub --build=docs --compiler=ldc2 dcv:io
dub --build=docs --compiler=ldc2 dcv:plot
dub --build=docs --compiler=ldc2 dcv:imageio
dub --build=docs --compiler=ldc2 dcv:linalg

cd docs
24 changes: 16 additions & 8 deletions source/dcv/core/image.d
@@ -1,17 +1,28 @@
/**
Module implements the Image utility class and a basic API for image manipulation.
The Image class encapsulates image properties with minimal functionality. It is primarily designed to be used as an I/O unit.
For any image processing needs, image data can be sliced to mir.ndslice.slice.Slice.
For any image processing needs, image data can be sliced to mir.ndslice.slice.Slice.
If an Image instance is initialized with non-null ubyte[] data, the Image instance behaves
like a slice shell, and it does not attempt to deallocate the borrowed data slice.
Example:
----
Image image = new Image(32, 32, ImageFormat.IF_MONO, BitDepth.BD_32);
Slice!(float*, 3, Contiguous) slice = image.sliced!float; // slice image data, considering the data is of float type.
Image image1 = new Image(32, 32, ImageFormat.IF_MONO, BitDepth.BD_32);
Slice!(ubyte*, 3, Contiguous) slice = image1.sliced; // slice image data
// Slice!(ubyte*, 2, Contiguous) slice = image1.sliced2D; // if it is known that the data represents a monochrome image
assert(image1.height == slice.length!0 && image1.width == slice.length!1);
assert(image1.channels == 1);
image = slice.asImage(ImageFormat.IF_MONO); // create the image back from sliced data.
Image image2 = slice.asImage(ImageFormat.IF_MONO); // create the image back from sliced data.
// asImage allocates with malloc, so it should be freed manually.
destroyFree(image2);
...
// here image3 neither copies nor owns someUbyteSlice.
Image image3 = new Image(32, 32, ImageFormat.IF_MONO, BitDepth.BD_32, someUbyteSlice); // or mallocNew!Image(...);
destroy(image3); // or destroyFree(image3); neither attempts to free someUbyteSlice.
----
Copyright: Copyright Relja Ljubobratovic 2016.
Authors: Relja Ljubobratovic
Authors: Relja Ljubobratovic, Ferhat Kurtulmuş
License: $(LINK3 http://www.boost.org/LICENSE_1_0.txt, Boost Software License - Version 1.0).
*/
module dcv.core.image;
@@ -260,9 +271,6 @@ public:
Get data array from this image.
Cast data array to corresponding dynamic array type,
and return it.
8-bit data is considered ubyte, 16-bit ushort, and 32-bit float.
Params:
T = (template parameter) value type (default ubyte) to which data array is casted to.
*/
inout auto data()
{
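As a quick orientation for the updated accessor, here is a minimal sketch of data and sliced used together, assuming an 8-bit mono image; mallocNew and destroyFree are used as in the module example above, and elementCount comes from mir.ndslice:
----
Image img = mallocNew!Image(4, 4, ImageFormat.IF_MONO, BitDepth.BD_8);
auto raw = img.data;    // borrowed view of the underlying pixel buffer
auto view = img.sliced; // 3D ubyte view: height x width x channels
assert(raw.length == view.elementCount); // 4 * 4 * 1 = 16 values
destroyFree(img);
----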
4 changes: 2 additions & 2 deletions source/dcv/features/package.d
@@ -6,12 +6,12 @@ module dcv.features;
* v0.1 norm:
* harris
* shi-tomasi
* fast (wrap C version by author: http://www.edwardrosten.com/work/fast.html
* fast $(LPAREN)wrap C version by author: http://www.edwardrosten.com/work/fast.html$(RPAREN)
* most popular blob detectors - sift, ???
* dense features - hog
*
* v0.1+:
* other popular feature detectors, descriptor (surf, brief, orb, akaze, etc.)
* other popular feature detectors and descriptors, such as surf, brief, orb, akaze, etc.
*/

public import dcv.features.corner, dcv.features.utils, dcv.features.sift;
49 changes: 32 additions & 17 deletions source/dcv/features/utils.d
@@ -38,24 +38,19 @@ struct Feature
float score;
}

@nogc nothrow:

/**
Extract corners as array of 2D points, from response matrix.
Params:
cornerResponse = Response matrix, collected as output from corner
detection algoritms such as harrisCorners, or shiTomasiCorners.
count = Number of corners which need to be extracted. Default is
-1 which indicate that all responses with value above the threshold
will be returned.
threshold = Response threshold - response values in the matrix
larger than this are considered as valid corners.
cornerResponse = Response matrix, collected as output from corner detection algorithms such as harrisCorners or shiTomasiCorners.
count = Number of corners which need to be extracted. Default is -1, which indicates that all responses with values above the threshold will be returned.
threshold = Response threshold - response values in the matrix larger than this are considered valid corners.
Returns:
Lazy array of size_t[2], as in array of 2D points, of corner reponses
RCArray!Pair, an array of 2D points, of corner responses
which fit the given criteria.
*/
@nogc nothrow:

auto extractCorners(T)
(
Slice!(T*, 2, Contiguous) cornerResponse,
@@ -162,9 +157,16 @@ unittest
assert(res[0] == [1, 1]);
}
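A hedged usage sketch of extractCorners, assuming a float corner-response matrix is already available (for example from harrisCorners, as the Params above mention); the cornerResponse variable is illustrative:
----
// cornerResponse: Slice!(float*, 2, Contiguous), e.g. a harrisCorners output
auto corners = cornerResponse.extractCorners(10, 0.0f); // at most the 10 strongest responses
foreach (i; 0 .. corners.length)
{
    auto p = corners[i]; // a 2D point (row, column), per the Returns note above
}
----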

/++
Returns euclidean distance between feature descriptor vectors.
+/
/**
Compute the Euclidean distance between two feature descriptor vectors.
Params:
desc1 = The first feature descriptor vector, of type const DescriptorValueType[].
desc2 = The second feature descriptor vector, of type const DescriptorValueType[].
Returns:
double = The Euclidean distance between the two feature descriptor vectors.
*/
double euclideanDistBetweenDescriptors(DescriptorValueType)(const DescriptorValueType[] desc1, const DescriptorValueType[] desc2)
{
double sum = 0;
@@ -176,9 +178,22 @@ double euclideanDistBetweenDescriptors(DescriptorValu
}
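A small usage sketch for the distance helper; the descriptor values below are purely illustrative:
----
float[] a = [1.0f, 2.0f, 2.0f];
float[] b = [1.0f, 0.0f, 0.0f];
double d = euclideanDistBetweenDescriptors(a, b);
assert(d > 2.82 && d < 2.83); // sqrt(0 + 4 + 4) = 2.8284...
----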

alias FeatureMatch = Tuple!(int, "index1", int, "index2", double, "distNearestNeighbor", double, "nearestNeighborDistanceRatio");
/++
Returns an Array containing matched indices of the given Keypoints with brute force approach.
+/

/**
Finds matching points between two sets of keypoints using the brute force approach.
This function matches keypoints from two sets by comparing each keypoint in the first set with every keypoint in the second set.
Params:
keypoints1 = The first set of keypoints, of type const ref Array!KeyPoint.
keypoints2 = The second set of keypoints, of type const ref Array!KeyPoint.
threshold = The distance ratio threshold for determining matches. Defaults to 0.5.
Returns:
Array!FeatureMatch = An array containing matched indices of keypoints. Each element in the array is a tuple
containing the index of the keypoint from the first set, the index of the matching keypoint
from the second set, the distance to the nearest neighbor, and the nearest neighbor distance ratio.
*/
Array!FeatureMatch
find_MatchingPointsBruteForce(KeyPoint)(const ref Array!KeyPoint keypoints1,
const ref Array!KeyPoint keypoints2, double threshold = 0.5)
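A hedged sketch of the brute-force matcher in use; keypoints1 and keypoints2 stand in for keypoint containers obtained elsewhere (e.g. from the SIFT routines in dcv.features.sift), so the variable names are illustrative:
----
// keypoints1, keypoints2: Array!KeyPoint detected beforehand
auto matches = find_MatchingPointsBruteForce(keypoints1, keypoints2, 0.5);
foreach (i; 0 .. matches.length)
{
    auto m = matches[i];
    // m.index1 / m.index2 pair up the keypoints; m.distNearestNeighbor and
    // m.nearestNeighborDistanceRatio describe how confident the match is
}
----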
4 changes: 1 addition & 3 deletions source/dcv/imageio/image.d
@@ -1,7 +1,7 @@
/**
Module for image I/O.
Copyright: Copyright Relja Ljubobratovic 2016.
Authors: Relja Ljubobratovic
Authors: Relja Ljubobratovic, Ferhat Kurtulmuş
License: $(LINK3 http://www.boost.org/LICENSE_1_0.txt, Boost Software License - Version 1.0).
*/
module dcv.imageio.image;
@@ -92,8 +92,6 @@ color format. To load original depth or format, set to _UNASSIGNED (ImageFormat.
BitDepth.BD_UNASSIGNED).
return:
Image read from the filesystem.
throws:
Exception and ImageIOException from imageformats library.
*/
Image imread(in string path, ReadParams params = ReadParams(ImageFormat.IF_UNASSIGNED, BitDepth.BD_UNASSIGNED))
{
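A short sketch of imread as documented above, loading an 8-bit monochrome image; the path is illustrative, borrowed from the convolution example further down:
----
Image img = imread("../data/lena.png", ReadParams(ImageFormat.IF_MONO, BitDepth.BD_8));
assert(img.channels == 1);
auto view = img.sliced; // ubyte view, height x width x 1
// if the returned Image is malloc-allocated (as other dcv APIs above are), release it with:
destroyFree(img);
----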
7 changes: 3 additions & 4 deletions source/dcv/imgproc/convolution.d
@@ -5,19 +5,19 @@ Following example loads famous image of Lena Söderberg and performs gaussian bl
----
import dcv.imageio.image : imread, ReadParams;
import dcv.core.image : Image, asType;
import dcv.core.image : Image;
import dcv.imgproc.convolution : conv;
Image lenaImage = imread("../data/lena.png", ReadParams(ImageFormat.IF_MONO, BitDepth.BD_8));
auto slice = lenaImage.sliced!ubyte;
auto slice = lenaImage.sliced;
----
... this loads the following image:<br>
$(IMAGE https://github.com/libmir/dcv/blob/master/examples/data/lena.png?raw=true)
----
auto blurred = slice
.asType!float // convert ubyte data to float.
.as!float // convert ubyte data to float.
.conv(gaussian!float(0.84f, 5, 5)); // convolve image with gaussian kernel
----
@@ -69,7 +69,6 @@ Params:
prealloc is not of same shape as the input, the resulting array will be newly allocated.
mask = Masking input. Convolution will skip each element where mask is 0. Default value
is empty slice, which tells that convolution will be performed on the whole input.
pool = Optional TaskPool instance used to parallelize computation.
Returns:
Resulting image after convolution, of same type as input tensor.
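Beyond the module-level example, a hedged sketch of conv with an explicit pre-allocated output buffer, assuming a 2D float input slice src and that prealloc accepts a reference-counted slice of the same shape (see the signature in the source for the exact types):
----
import mir.ndslice.allocation : rcslice;

auto kernel = gaussian!float(0.84f, 5, 5);
auto prealloc = rcslice!float(src.length!0, src.length!1); // reused across calls to avoid reallocation
auto dst = src.conv(kernel, prealloc);
----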
9 changes: 0 additions & 9 deletions source/dcv/imgproc/filter.d
@@ -527,7 +527,6 @@ Params:
Default value is EdgeKernel.SIMPLE, which calls calcPartialDerivatives function
to calculate derivatives. Other options will perform convolution with requested
kernel type.
pool = TaskPool instance used parallelise the algorithm.
Note:
Input slice's memory has to be contiguous. Magnitude and orientation slices' strides
@@ -615,7 +614,6 @@ Params:
mag = Gradient magnitude.
orient = Gradient orientation of the same image source as magnitude.
prealloc = Optional pre-allocated buffer for output slice.
pool = TaskPool instance used parallelise the algorithm.
Note:
Orientation and pre-allocated structures must match. If prealloc
@@ -684,7 +682,6 @@ Params:
upThresh = upper threshold value after non-maxima suppression.
edgeKernelType = Type of edge kernel used to calculate image gradients.
prealloc = Optional pre-allocated buffer.
pool = TaskPool instance used parallelise the algorithm.
*/
@nogc nothrow
Slice!(RCI!V, 2LU, Contiguous) canny(V, T, SliceKind kind)
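A hedged sketch of canny as documented above, assuming gray is a 2D grayscale slice of float values; the threshold values are illustrative and edgeKernelType/prealloc are left at their defaults:
----
auto edges = gray.canny!ubyte(50, 150); // lower/upper hysteresis thresholds
----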
@@ -738,7 +735,6 @@ Params:
sigmaSpace = Spatial sigma value.
kernelSize = Size of convolution kernel. Must be odd number.
prealloc = Optional pre-allocated result image buffer. If not of same shape as input slice, it is allocated anew.
pool = Optional TaskPool instance used to parallelize computation.
Returns:
Slice of filtered image.
@@ -861,7 +857,6 @@ Params:
slice = Input image slice.
kernelSize = Square size of median kernel.
prealloc = Optional pre-allocated return image buffer.
pool = Optional TaskPool instance used to parallelize computation.
Returns:
Returns filtered image of same size as the input. If prealloc parameter is not an empty slice, and is
@@ -1122,7 +1117,6 @@ Params:
slice = Input image slice, to be eroded.
kernel = Erosion kernel. Default value is radialKernel!T(3).
prealloc = Optional pre-allocated buffer to hold result.
pool = Optional TaskPool instance used to parallelize computation.
Returns:
Eroded image slice, of same type as input image.
@@ -1187,7 +1181,6 @@ Params:
slice = Input image slice, to be dilated.
kernel = Dilation kernel. Default value is radialKernel!T(3).
prealloc = Optional pre-allocated buffer to hold result.
pool = Optional TaskPool instance used to parallelize computation.
Returns:
Dilated image slice, of same type as input image.
@@ -1216,7 +1209,6 @@ Params:
slice = Input image slice, to be opened.
kernel = Erosion/Dilation kernel. Default value is radialKernel!T(3).
prealloc = Optional pre-allocated buffer to hold result.
pool = Optional TaskPool instance used to parallelize computation.
Returns:
Opened image slice, of same type as input image.
@@ -1246,7 +1238,6 @@ Params:
slice = Input image slice, to be closed.
kernel = Erosion/Dilation kernel. Default value is radialKernel!T(3).
prealloc = Optional pre-allocated buffer to hold result.
pool = Optional TaskPool instance used to parallelize computation.
Returns:
Closed image slice, of same type as input image.
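To tie the morphology entries above together, a hedged sketch assuming binary is a 2D ubyte slice (e.g. a threshold output), spelling out the documented radialKernel default explicitly:
----
auto eroded  = binary.erode(radialKernel!ubyte(3));
auto dilated = binary.dilate(radialKernel!ubyte(3));
auto opened  = binary.open(radialKernel!ubyte(3));  // erode, then dilate
auto closed  = binary.close(radialKernel!ubyte(3)); // dilate, then erode
----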
2 changes: 1 addition & 1 deletion source/dcv/imgproc/interpolate.d
@@ -56,7 +56,7 @@ Linear interpolation.
Params:
slice = Input slice which values are interpolated.
pos = Position on which slice values are interpolated.
pos0 = Position on which slice values are interpolated.
Returns:
Interpolated resulting value.
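A hedged one-line sketch of the renamed parameter in use, assuming the linear function of this module is called on a 2D slice with fractional coordinates:
----
auto v = gray.linear(4.5, 7.2); // value interpolated from the surrounding pixels
----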
4 changes: 3 additions & 1 deletion source/dcv/imgproc/threshold.d
@@ -3,7 +3,7 @@ Image thresholding module.
Copyright: Copyright Relja Ljubobratovic 2016.
Authors: Relja Ljubobratovic
Authors: Relja Ljubobratovic, Ferhat Kurtulmuş
License: $(LINK3 http://www.boost.org/LICENSE_1_0.txt, Boost Software License - Version 1.0).
*/
@@ -36,6 +36,7 @@ Params:
input = Input slice.
lowThresh = Lower threshold value.
highThresh = Higher threshold value.
inverse = A flag to produce the output as an inverted (negative) binary image.
prealloc = Optional pre-allocated slice buffer for output.
Note:
@@ -109,6 +110,7 @@ Calls threshold(slice, thresh, thresh, prealloc)
Params:
input = Input slice.
thresh = Threshold value - any value lower than this will be set to 0, and higher to 1.
inverse = A flag to produce the output as an inverted (negative) binary image.
prealloc = Optional pre-allocated slice buffer for output.
Note:
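Finally, a hedged sketch of the single-threshold overload with the new inverse flag, assuming gray is a ubyte grayscale slice and that inverse precedes prealloc as listed above:
----
auto binary   = gray.threshold!ubyte(50);       // values above 50 map to 1 (per the docs), others to 0
auto inverted = gray.threshold!ubyte(50, true); // inverse = true flips foreground and background
----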