Commit f498269

Updated README

elbaulp committed Sep 27, 2018
1 parent ba2fc79 commit f498269
Showing 2 changed files with 44 additions and 1 deletion.

43 changes: 42 additions & 1 deletion README.org
@@ -2,6 +2,48 @@

My MSc in Data Science final project. This is a library for massive data stream analysis using Apache Flink.

* Implemented algorithms
** Feature Selection
*** Fast Correlation-Based Filter (FCBF)
FCBF is a multivariate feature selection method where the class relevance and the dependency between each feature pair are taken into account. Based on information theory, FCBF uses symmetrical uncertainty to calculate dependencies of features and the class relevance. Starting with the full feature set, FCBF heuristically applies a backward selection technique with a sequential search strategy to remove irrelevant and redundant features. The algorithm stops when there are no features left to eliminate.

H.-L. Nguyen, Y.-K. Woon, W.-K. Ng, L. Wan, Heterogeneous ensemble for feature drifts in data streams, in: Proceedings of the 16th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining - Volume Part II, PAKDD’12, 2012, pp. 1–12.
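As a rough illustration of the measure FCBF is built on (a plain-Scala sketch, not the library's Flink implementation; the object and method names are made up here), symmetrical uncertainty is SU(X, Y) = 2 · IG(X|Y) / (H(X) + H(Y)), where IG(X|Y) = H(X) + H(Y) - H(X, Y):

```scala
object SymmetricalUncertainty {
  // Shannon entropy of a discrete variable, in bits
  def entropy[A](xs: Seq[A]): Double = {
    val n = xs.size.toDouble
    xs.groupBy(identity).values
      .map(g => g.size / n)
      .map(p => -p * math.log(p) / math.log(2))
      .sum
  }

  // Joint entropy H(X, Y), computed over value pairs
  def jointEntropy[A, B](xs: Seq[A], ys: Seq[B]): Double =
    entropy(xs.zip(ys))

  // SU(X, Y) = 2 * IG(X | Y) / (H(X) + H(Y)),
  // with IG(X | Y) = H(X) + H(Y) - H(X, Y)
  def su[A, B](xs: Seq[A], ys: Seq[B]): Double = {
    val hx = entropy(xs)
    val hy = entropy(ys)
    if (hx + hy == 0) 0.0
    else 2 * (hx + hy - jointEntropy(xs, ys)) / (hx + hy)
  }
}
```

SU is 1 for perfectly correlated variables and 0 for independent ones; FCBF uses it both for feature–class relevance and for feature–feature redundancy checks.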
*** Online Feature Selection (OFS)
OFS is an ε-greedy online feature selection method based on the weights generated by an online classifier (neural networks); it trades off exploration and exploitation of features.

J. Wang, P. Zhao, S. Hoi, R. Jin, Online feature selection and its applications, IEEE Transactions on Knowledge and Data Engineering 26 (3) (2014) 698–710.
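The core of the ε-greedy scheme can be sketched as follows (a minimal plain-Scala illustration; `OFSSketch`, `truncate`, and `select` are hypothetical names, not this library's API): with probability ε a random subset of B features is explored, otherwise the B largest-magnitude classifier weights are kept and the rest are zeroed.

```scala
import scala.util.Random

object OFSSketch {
  // Exploitation step: keep the B largest-magnitude weights,
  // zeroing the rest of the weight vector.
  def truncate(weights: Array[Double], b: Int): Array[Double] = {
    val keep = weights.zipWithIndex
      .sortBy { case (w, _) => -math.abs(w) }
      .take(b)
      .map(_._2)
      .toSet
    weights.zipWithIndex.map { case (w, i) => if (keep(i)) w else 0.0 }
  }

  // Epsilon-greedy selection: explore B random features with
  // probability epsilon, otherwise exploit the largest weights.
  def select(weights: Array[Double], b: Int, epsilon: Double, rnd: Random): Array[Double] =
    if (rnd.nextDouble() < epsilon) {
      val keep = rnd.shuffle(weights.indices.toList).take(b).toSet
      weights.zipWithIndex.map { case (w, i) => if (keep(i)) w else 0.0 }
    } else truncate(weights, b)
}
```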
*** Katakis' FS
This FS scheme consists of two steps: (a) an incremental feature ranking method, and (b) an incremental learning algorithm that can consider a subset of the features during prediction (Naive Bayes).

I. Katakis, G. Tsoumakas, I. Vlahavas, Advances in Informatics: 10th Panhellenic Conference on Informatics, PCI 2005, Springer Berlin Heidelberg, 2005, Ch. On the Utility of Incremental Feature Selection for the Classification of Textual Data Streams, pp. 338–348.
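Step (a) can be sketched by maintaining per-feature value/label counts and ranking features by information gain computed from those counts (a hypothetical plain-Scala illustration; `IncrementalRanker` is not part of this library, and step (b), the incremental Naive Bayes, is omitted):

```scala
import scala.collection.mutable

class IncrementalRanker(numFeatures: Int) {
  // counts(f)((value, label)) = number of occurrences seen so far
  private val counts =
    Array.fill(numFeatures)(mutable.Map.empty[(Int, Int), Long].withDefaultValue(0L))
  private var n = 0L

  // Step (a): update statistics incrementally with one labelled example
  def update(x: Array[Int], label: Int): Unit = {
    var f = 0
    while (f < numFeatures) { counts(f)((x(f), label)) += 1; f += 1 }
    n += 1
  }

  private def entropy(freqs: Iterable[Long]): Double = {
    val total = freqs.sum.toDouble
    freqs.filter(_ > 0).map { c =>
      val p = c / total
      -p * math.log(p) / math.log(2)
    }.sum
  }

  // Information gain of feature f w.r.t. the class, from the counts
  def infoGain(f: Int): Double = {
    val m = counts(f)
    val classCounts = m.groupBy(_._1._2).map(_._2.values.sum)
    val hClass = entropy(classCounts)
    val hCond = m.groupBy(_._1._1).map { case (_, sub) =>
      (sub.values.sum / n.toDouble) * entropy(sub.values)
    }.sum
    hClass - hCond
  }

  // The top-k features would then be fed to an incremental Naive Bayes
  def topK(k: Int): Seq[Int] =
    (0 until numFeatures).sortBy(f => -infoGain(f)).take(k)
}
```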
** Discretization
*** Incremental Discretization Algorithm (IDA)
Incremental Discretization Algorithm (IDA) approximates quantile-based discretization on the entire data stream encountered to date by maintaining a random sample of the data which is used to calculate the cut points. IDA uses the reservoir sampling algorithm to maintain a sample drawn uniformly at random from the entire stream up until the current time.

G. I. Webb. 2014. Contrary to Popular Belief Incremental Discretization can be Sound, Computationally Efficient and Extremely Useful for Streaming Data. In Proceedings of the 2014 IEEE International Conference on Data Mining (ICDM '14). IEEE Computer Society, Washington, DC, USA, 1031-1036.
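The two ingredients of IDA, reservoir sampling and quantile cut points over the sample, can be sketched like this (a simplified plain-Scala illustration under assumed parameter names; `IDASketch` is not this library's class):

```scala
import scala.util.Random

class IDASketch(bins: Int, sampleSize: Int, rnd: Random) {
  private val reservoir = scala.collection.mutable.ArrayBuffer.empty[Double]
  private var seen = 0L

  // Reservoir sampling: after the reservoir fills, each new element
  // replaces a random slot with probability sampleSize / seen, which
  // keeps a uniform sample of the whole stream so far.
  def observe(x: Double): Unit = {
    seen += 1
    if (reservoir.size < sampleSize) reservoir += x
    else {
      val j = (rnd.nextDouble() * seen).toLong
      if (j < sampleSize) reservoir(j.toInt) = x
    }
  }

  // Quantile-based cut points computed from the current sample
  def cutPoints: Seq[Double] = {
    val sorted = reservoir.sorted
    (1 until bins).map(i => sorted((i * sorted.size) / bins))
  }

  // Bin index of x under the current cut points
  def discretize(x: Double): Int = cutPoints.count(_ <= x)
}
```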
*** Partition Incremental Discretization algorithm (PiD)
PiD performs incremental discretization. The basic idea is to perform the task in two layers. The first layer receives the sequence of input data and keeps some statistics on the data using many more intervals than required. Based on the statistics stored by the first layer, the second layer creates the final discretization. The proposed architecture processes streaming examples in a single scan, in constant time and space even for infinite sequences of examples.

J. Gama, C. Pinto, Discretization from data streams: Applications to histograms and data mining, in: Proceedings of the 2006 ACM Symposium on Applied Computing, SAC ’06, 2006, pp. 662–667.
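The two-layer idea can be sketched as follows (a simplified plain-Scala illustration with a fixed-range, equal-width first layer and an equal-frequency second layer; `PiDSketch` is a made-up name, not this library's class):

```scala
class PiDSketch(min: Double, max: Double, layer1Bins: Int, finalBins: Int) {
  // Layer 1: fine-grained equal-width histogram, updated per example
  private val counts = Array.fill(layer1Bins)(0L)
  private val width = (max - min) / layer1Bins
  private var n = 0L

  def observe(x: Double): Unit = {
    val i = math.min(layer1Bins - 1, math.max(0, ((x - min) / width).toInt))
    counts(i) += 1
    n += 1
  }

  // Layer 2: merge layer-1 intervals into finalBins bins of
  // (approximately) equal frequency, using only the stored counts.
  def cutPoints: Seq[Double] = {
    val target = n.toDouble / finalBins
    val cuts = scala.collection.mutable.ArrayBuffer.empty[Double]
    var acc = 0L
    for (i <- 0 until layer1Bins if cuts.size < finalBins - 1) {
      acc += counts(i)
      if (acc >= target * (cuts.size + 1))
        cuts += min + (i + 1) * width
    }
    cuts.toSeq
  }
}
```

Because layer 2 works only on layer-1 counts, each example is processed in a single scan with constant time and space.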
*** Local Online Fusion Discretizer (LOFD)
LOFD \cite{lofd} is an online, self-adaptive discretizer for
streaming classification. It smoothly adapts its interval limits to
reduce the negative impact of shifts, and it addresses the interval
labeling and discretizer/learner interaction problems that arise in
data streaming, providing two similar solutions for the latter. The
algorithm thus aims to reduce the negative impact of fluctuations in
evolving intervals.

S. Ramírez-Gallego, S. García, F. Herrera, Online entropy-based
discretization for data streaming classification, Future Generation
Computer Systems 86 (2018) 59–70.
https://doi.org/10.1016/j.future.2018.03.008
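Loosely inspired by the entropy-based criterion in the reference above (a hypothetical plain-Scala sketch, not LOFD itself), an interval boundary can be chosen as the cut that minimizes the weighted class entropy of the two resulting sub-intervals:

```scala
object EntropySplit {
  private def entropy(labels: Seq[Int]): Double = {
    val n = labels.size.toDouble
    labels.groupBy(identity).values
      .map(g => g.size / n)
      .map(p => -p * math.log(p) / math.log(2))
      .sum
  }

  // Weighted class entropy after splitting (value, label) pairs at `cut`
  private def splitEntropy(pts: Seq[(Double, Int)], cut: Double): Double = {
    val (left, right) = pts.partition(_._1 <= cut)
    val n = pts.size.toDouble
    (left.size / n) * entropy(left.map(_._2)) +
      (right.size / n) * entropy(right.map(_._2))
  }

  // Pick the candidate boundary minimizing weighted class entropy
  def bestCut(pts: Seq[(Double, Int)]): Double = {
    val sorted = pts.sortBy(_._1)
    val candidates = sorted.map(_._1).distinct.init
    candidates.minBy(c => splitEntropy(sorted, c))
  }
}
```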


* References
- [[https://github.com/sramirez/MOAReduction][MOAReduction]] By [[https://github.com/sramirez/][@sramirez]]
- Some data structures, such as =IntervalHeap=, have been adapted from [[https://github.com/allenbh/gkutil_java/blob/master/src/gkimfl/util/IntervalHeap.java][allenbh/gkutil_java]], by [[https://github.com/allenbh/][@allenbh]].
@@ -15,7 +57,6 @@ This is a list of all resources that helped me to build this library:
- [[https://github.com/tmadl/sklearn-expertsys/blob/master/Discretization/MDLP.py][tmadl/sklearn-expertsys: Discretization MDLP]]
- [[https://github.com/shiralkarprashant/FCBF][FCBF python implementation]]


* Used DataSets
- [[https://archive.ics.uci.edu/ml/datasets/Iris/][Iris]]
- [[https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#svmguide3][SvmGuide3]]
@@ -17,6 +17,7 @@

package com.elbauldelprogramador.featureselection

import com.elbauldelprogramador.utils.FlinkUtils
import breeze.linalg.norm
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.{ LabeledVector, Parameter, ParameterMap, WeightVector }
@@ -121,6 +122,7 @@ object OFSGDTransformer {

// TODO: Better way to compute dimensions
val dimensionsDS = input.map(_.vector.size).reduce((_, b) => b).collect.head
val dimensionsDS = FlinkUtils.numAttrs(input)

val values = Array.fill(dimensionsDS)(0.0)
var weights = WeightVector(DenseVector(values), .0).weights.asBreeze
