This is a repository for research on indoor localization based on wireless fingerprinting techniques. For more details, please visit the XJTLU SURF project home page.
- Implement a multi-label classifier to address the issues described on 2017-08-17: the 3 building and 5 floor identifiers are one-hot encoded into an 8-dimensional vector (e.g., '001|01000') and classified with different class weights (e.g., 30 for buildings and 1 for floors); the resulting 8-dimensional output is then split into a 3-dimensional building vector and a 5-dimensional floor vector, and the index of the maximum value of each vector is returned as the classified class (results); a sketch of this scheme is given below.
- Still, the parameters need a lot of optimization.
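A minimal sketch of the encoding, decoding, and weighting steps described above, assuming building indices in 0-2 and floor indices in 0-4; the helper names and the weighted binary cross-entropy below are illustrative, not necessarily what the repository's code does.

```python
import numpy as np
from keras import backend as K

# Hypothetical helper: one-hot encode building (3 classes) and floor (5 classes)
# indices into a single 8-dimensional multi-label target such as '001|01000'.
def encode_building_floor(buildings, floors):
    buildings, floors = np.asarray(buildings), np.asarray(floors)
    y = np.zeros((len(buildings), 8))
    y[np.arange(len(buildings)), buildings] = 1.0   # first 3 dimensions: building
    y[np.arange(len(floors)), 3 + floors] = 1.0     # last 5 dimensions: floor
    return y

# Decode an 8-dimensional prediction by splitting it into building and floor
# parts and taking the argmax of each part.
def decode_building_floor(y_pred):
    return np.argmax(y_pred[:, :3], axis=1), np.argmax(y_pred[:, 3:], axis=1)

# Illustrative per-class weighting: 30 for the 3 building outputs and 1 for the
# 5 floor outputs, applied inside a weighted binary cross-entropy loss.
CLASS_WEIGHTS = K.constant([30.0] * 3 + [1.0] * 5)

def weighted_binary_crossentropy(y_true, y_pred):
    return K.mean(K.binary_crossentropy(y_true, y_pred) * CLASS_WEIGHTS, axis=-1)
```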
- Implement a new program that calculates accuracies separately for building and floor classification, in order to investigate the hierarchical nature of the classification problem at hand; the deep-learning-based place recognition system described in the key paper [1] does not take this into account and carries out classification based on flattened labels (i.e., (building, floor) -> 'building-floor'). We are now considering two options to guarantee 100% accuracy for the building classification (a sketch of the separate accuracy computation is given after the two options below):
- A hierarchical classifier with a tree structure, using multiple classifiers and data sets; this is a conventional approach and serves as a reference for this investigation.
- A single classifier with a weighted loss function [2]. In our case, however, the loss function does not have a closed-form gradient, which forces us either to use evolutionary algorithms (e.g., a genetic algorithm) to train the neural network weights or to resort to multi-label classification with different class weights (i.e., higher weights for buildings in our case).
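A sketch of the separate accuracy computation mentioned above, assuming flattened labels of the form 'building-floor' (e.g., '2-3'); the function name and label format are illustrative.

```python
import numpy as np

# Hypothetical helper: split flattened 'building-floor' labels and compute
# building-only and floor-only accuracies from the same set of predictions.
def building_floor_accuracies(y_true_flat, y_pred_flat):
    true_bld, true_flr = zip(*(label.split('-') for label in y_true_flat))
    pred_bld, pred_flr = zip(*(label.split('-') for label in y_pred_flat))
    bld_acc = np.mean(np.array(true_bld) == np.array(pred_bld))
    flr_acc = np.mean(np.array(true_flr) == np.array(pred_flr))
    return bld_acc, flr_acc

# Example: both buildings are correct, but one of the two floors is wrong.
print(building_floor_accuracies(['0-1', '1-2'], ['0-1', '1-3']))  # (1.0, 0.5)
```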
- Today, we further simplified the building/floor classification system by removing a hidden layer from the classifier (and therefore the dropout as well), resulting in the configuration '520-64-4-13' (including the input and output layers) with loss=7.050603e-01 and accuracy=9.234923e-01 (results); a sketch of this configuration is given below. This might mean that the 4-dimensional data from the SAE encoder (64-4) are linearly separable. Because the SAE encoder weights are also trained as part of the combined system, however, this needs further investigation.
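For reference, a minimal Keras sketch of the '520-64-4-13' layer configuration only (520 RSS inputs, the 64-4 SAE encoder layers, and a 13-way softmax output with no classifier hidden layer and no dropout); the encoder weights would come from SAE pretraining as in the entries below, and the activation functions and optimizer here are assumptions.

```python
from keras.models import Sequential
from keras.layers import Dense

# Combined network 520-64-4-13: SAE encoder (520-64-4) feeding a 13-way
# softmax output directly, with no classifier hidden layer and no dropout.
model = Sequential([
    Dense(64, input_dim=520, activation='relu'),  # first encoder layer (520-64)
    Dense(4, activation='relu'),                  # bottleneck encoder layer (64-4)
    Dense(13, activation='softmax'),              # 13 flattened building-floor classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```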
- We investigated whether a couple of strong RSSs in a fingerprint dominate the classification performance in building/floor classification. After many trials with different configurations, we obtained accuracies of more than 90% with a stacked autoencoder (SAE) having 64-4-64 hidden layers (i.e., a bottleneck of just 4 dimensions) and a classifier having just one 128-node hidden layer (results); a sketch of this setup is given below. This implies that a small number of RSSs from the access points (APs) deployed in a building/floor can provide enough information for building/floor classification; localization on the same floor, by the way, would be quite different, because RSSs from possibly many APs have a significant impact on localization performance.
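A sketch of this setup under the usual two-stage assumption: first train the autoencoder with 64-4-64 hidden layers to reconstruct the 520-dimensional RSS vectors, then keep its encoder and stack the classifier with a single 128-node hidden layer on top; the optimizer, dropout rate, and other hyperparameters are placeholders.

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Stage 1: stacked autoencoder with 64-4-64 hidden layers for 520-dim RSS vectors.
model = Sequential([
    Dense(64, input_dim=520, activation='relu'),
    Dense(4, activation='relu'),                  # 4-dimensional bottleneck
    Dense(64, activation='relu'),
    Dense(520, activation='sigmoid'),             # reconstruct the (scaled) input
])
model.compile(optimizer='adam', loss='mse')
# model.fit(rss_train, rss_train, epochs=20, batch_size=10)  # unsupervised pretraining

# Stage 2: drop the decoder layers, keep the 520-64-4 encoder, and stack a
# classifier with a single 128-node hidden layer on top of it.
model.pop()                                       # remove the 520-unit reconstruction layer
model.pop()                                       # remove the 64-unit decoder layer
model.add(Dense(128, activation='relu'))          # single 128-node hidden layer
model.add(Dropout(0.2))                           # placeholder dropout rate
model.add(Dense(13, activation='softmax'))        # 13 flattened building-floor classes
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```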
- We finally obtained accuracies of more than 90% from this version, which are comparable to the results of the key paper [1] based on the UJIIndoorLoc Data Set; refer to the multi-class classification example for the classifier parameter settings.
- We replaced the activation function of the hidden layers from 'tanh' to 'relu' per the second answer to this question (results). Compared to the case with 'tanh', however, the results do not seem to improve (somewhat in line with the gut-feeling suggestions from this).
- We first tried a feed-forward classifier with just one hidden layer per the comments from this (results); a sketch of this classifier is given below. (* nh: number of hidden-layer nodes, dr: dropout rate, loss: categorical cross-entropy, acc: accuracy *)
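For reference against the notation in the table, a minimal sketch of such a single-hidden-layer feed-forward classifier parameterized by nh and dr; the input dimension, activation, optimizer, and the example values of nh and dr are placeholders.

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

def build_classifier(nh, dr, input_dim=520, n_classes=13):
    """Single-hidden-layer feed-forward classifier: nh hidden nodes, dropout rate dr."""
    model = Sequential([
        Dense(nh, input_dim=input_dim, activation='tanh'),
        Dropout(dr),
        Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',  # 'loss' column in the table
                  metrics=['accuracy'])             # 'acc' column in the table
    return model

model = build_classifier(nh=128, dr=0.2)  # placeholder values for nh and dr
```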
[1] M. Nowicki and J. Wietrzykowski, "Low-effort place recognition with WiFi fingerprints using deep learning," arXiv:1611.02049v2 [cs.RO], 2016. (arXiv)
[2] T. Yamashita et al., "Cost-alleviative learning for deep convolutional neural network-based facial part labeling," IPSJ Transactions on Computer Vision and Applications, vol. 7, pp. 99-103, 2015. (DOI)