Effective Elasticsearch Mapping

Finally, after a hiatus of more than a year, I am writing on this blog again! This time, I’ll write about Effective Elasticsearch Mapping, i.e. some tips on how to define our mappings in Elasticsearch 1.7.x. We have just started using Elasticsearch 2.0 at LARC (my current workplace :)), so I’ll do my best to update this list as we gain experience with Elasticsearch 2.0.

1. Use appropriate data type for an ES field

Elasticsearch will try its best to determine the data type of an unknown field when we index a document. However, it is better to use the appropriate data type for each ES field, so define your mapping early (i.e. before you start indexing documents) and use an index template [1].

Some of the data types that you should use:

  1. date type for timestamps.
    Don’t use the long or string type even though they may look promising. Using the date type allows us to use the index easily in Kibana and supports time-based aggregations without the need for scanning.
  2. geo_point type for geo locations (i.e. latitude-longitude); a sample index template covering both tips is sketched below. Continue reading Effective Elasticsearch Mapping

  1. https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
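To make the tips above concrete, here is a rough sketch of registering such an index template from Java with the Elasticsearch 1.x client. The template name, index pattern, document type, and field names (timestamp, location) are made up for the example, and the exact builder methods may differ slightly between 1.x releases.

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class PutTemplateExample {
    public static void main(String[] args) {
        // Connect to a local ES 1.x node; address and port are assumptions for this sketch
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

        // Index template: every index whose name matches "events-*" gets this mapping
        // up front, so "timestamp" is indexed as date and "location" as geo_point
        // (both field names are hypothetical)
        client.admin().indices().preparePutTemplate("events-template")
                .setTemplate("events-*")
                .addMapping("event",
                        "{ \"event\": { \"properties\": {"
                      + "  \"timestamp\": { \"type\": \"date\" },"
                      + "  \"location\":  { \"type\": \"geo_point\" }"
                      + "} } }")
                .get();

        client.close();
    }
}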

Configuring Logging in Play 2.2.x

Well, actually it is pretty straightforward: just follow the Play 2.2.x documentation. But there is a caveat that cost me several hours to resolve.

Play Framework

Based on the Play 2.2.x documentation, we should pass `-Dlogger.resource` after the `start` command inside the Play console, i.e.

[OS-console] $ play
[info] Loading project definition from ....
[info] ...
..... //initialization message from Play
[play-console] $ start -Dlogger.resource=my-logging-configuration.xml

But what if we want to invoke it outside the Play console? Does the command below work properly?

play start -Dlogger.resource=my-logging-configuration.xml

Well, the answer is NO!

So, I added some code to my controller to print the logging configuration file in use (which is a Logback configuration file), and it showed that Play does not pick up the VM arguments (i.e. -Dlogger.resource) when invoked with the command above. Logback falls back to Play’s default logger.xml in `$PROJECT_HOME/play-2.2.x/repository/local/com.typesafe.play/play_2.10/2.2.x/jars/play_2.10.jar!/logger.xml`.

So, to resolve this, we should invoke the “play start” command as below, placing the VM arguments before `start`:

play -Dlogger.resource=my-logging-config.xml -Dother.vmarg=val start

That’s all for this month! See you (hopefully earlier than) next month!

SAMOA – Scalable Advanced Massive Online Analysis

In this post, I’ll give a quick overview of an upcoming distributed streaming machine learning framework, Scalable Advanced Massive Online Analysis (SAMOA). As I mentioned before, SAMOA is part of my and Antonio’s theses with Yahoo! Labs Barcelona.

What is SAMOA?

SAMOA is a tool to perform mining on big data streams. It is a distributed streaming machine learning (ML) framework, i.e. it is like Mahout, but for stream mining. SAMOA contains a programming abstraction for distributed streaming ML algorithms (refer to this post for a definition of streaming ML) that enables the development of new ML algorithms without dealing with the complexity of the underlying stream processing engines (SPEs, such as Twitter Storm and S4). SAMOA also provides extensibility for integrating new SPEs into the framework. Together, these features let SAMOA users code a distributed streaming ML algorithm once and execute it on multiple SPEs.
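To illustrate the write-once, run-on-any-SPE idea (and only the idea: the sketch below is NOT SAMOA’s actual API, and every name in it is invented), consider a tiny Java abstraction: the algorithm is coded against a small processor interface, and engine-specific adapters would wrap it as Storm bolts, S4 processing elements, and so on.

// Illustration only -- not SAMOA's real API; it just sketches an SPE-agnostic abstraction.
public class SpeAbstractionSketch {

    // The algorithm-facing abstraction: consume one stream event, update internal state.
    interface StreamProcessor {
        void process(String event);
    }

    // An "algorithm" written once against the abstraction (a trivial word-length averager).
    static class WordLengthAverager implements StreamProcessor {
        long totalChars = 0;
        long words = 0;
        public void process(String word) {
            totalChars += word.length();
            words++;
        }
    }

    // Stand-in for an engine adapter; a real adapter would wrap the processor
    // as a Storm bolt or an S4 processing element and feed it real stream events.
    static void runOnFakeEngine(StreamProcessor processor, String... events) {
        for (String e : events) {
            processor.process(e);
        }
    }

    public static void main(String[] args) {
        WordLengthAverager algorithm = new WordLengthAverager();
        runOnFakeEngine(algorithm, "stream", "mining", "with", "samoa");
        System.out.println("average word length = "
                + (double) algorithm.totalChars / algorithm.words);
    }
}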

Continue reading SAMOA – Scalable Advanced Massive Online Analysis

Parallelism for Distributed Streaming ML


In this post, we will revisit several types of parallelism that can be applied to turn conventional streaming (or online) machine learning algorithms into distributed and parallel ones. This post is a quick summary of half of chapter 4 of my thesis (which I completed one month ago! yay!).

Data Parallelism

Data parallelism parallelizes and distributes an algorithm based on the data. There are two types of data parallelism: vertical parallelism and horizontal parallelism.

Horizontal Parallelism

Horizontal parallelism splits the data based on quantity, i.e. subsets of equal size go into each parallel computation. If, say, we have 4 components that perform the computation in parallel and 100 data items, then each component processes 25 items. As shown in the figure below, each parallel component has a local machine learning (ML) model. Every parallel component then periodically pushes its updates into the global ML model.
Horizontal Parallelism

This type of parallelism is often used to provide horizontal scalability. In an online learning context, horizontal parallelism is suitable when the data arrival rate is very high. However, horizontal parallelism needs a large amount of memory, since the online machine learning model has to be replicated in every parallel computation element. Another caveat of horizontal parallelism is the additional complexity introduced by propagating model updates between the parallel computation elements. An example of horizontal parallelism in a distributed streaming machine learning algorithm is Ben-Haim and Yom-Tov’s work on a streaming parallel decision tree algorithm.
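A toy sketch of this scheme in Java: the stream is dealt out evenly to several workers, each worker keeps its own local model (here a running mean stands in for a real online learner), and the local models are periodically merged into a global model. All class and variable names are invented for illustration.

import java.util.ArrayList;
import java.util.List;

public class HorizontalParallelismSketch {

    // Stand-in for a local online learner: it only tracks a running mean.
    static class LocalModel {
        double sum = 0;
        long count = 0;
        void update(double x) { sum += x; count++; }
    }

    public static void main(String[] args) {
        int parallelism = 4;
        List<LocalModel> workers = new ArrayList<LocalModel>();
        for (int i = 0; i < parallelism; i++) {
            workers.add(new LocalModel());
        }

        double globalMean = 0;
        for (int i = 0; i < 100; i++) {
            double instance = Math.random(); // a fake data instance from the stream

            // Horizontal parallelism: each instance goes to exactly one worker,
            // so every worker sees roughly 1/parallelism of the stream.
            workers.get(i % parallelism).update(instance);

            // Periodic update of the global model from all local models.
            if ((i + 1) % 25 == 0) {
                double sum = 0;
                long n = 0;
                for (LocalModel w : workers) { sum += w.sum; n += w.count; }
                globalMean = sum / n;
            }
        }
        System.out.println("global model (mean) after merging: " + globalMean);
    }
}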

Continue reading Parallelism for Distributed Streaming ML

Bootstrapping Twitter Storm

I have had the chance to use Twitter Storm for my thesis, and in this post I would like to give some pointers about it. I hope this will be useful for those who are starting to use Storm in their projects 🙂

Of course I am NOT talking about this movie 😀

Well, I tried to search for a Twitter Storm logo, but I could not find one. Then I suddenly remembered the movie pictured above. Okay, let’s get back to business.

What is Twitter Storm?

Twitter Storm is a distributed streaming computation framework. It does for real-time processing (via streaming) what Hadoop’s MapReduce (MR) does for batch processing. The main reason it exists is the inflexibility of Hadoop MR in handling stream processing, i.e. it is too complex and error-prone to configure Hadoop MR to handle streaming data (for more detail, watch the first five minutes of this video). Continue reading Bootstrapping Twitter Storm
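To give a feel for the programming model, here is a minimal, self-contained word-splitting topology written against the backtype.storm API of that era (Storm 0.8/0.9). The spout and bolt below are toy examples made up for this sketch, not code from any real project.

import java.util.Map;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class WordSplitTopology {

    // Spout: the stream source; this toy version emits the same sentence forever.
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        public void nextTuple() {
            collector.emit(new Values("the quick brown fox"));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Bolt: consumes sentence tuples and emits one tuple per word.
    public static class SplitSentenceBolt extends BaseBasicBolt {
        public void execute(Tuple input, BasicOutputCollector collector) {
            for (String word : input.getStringByField("sentence").split(" ")) {
                collector.emit(new Values(word));
            }
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout(), 1);
        // shuffleGrouping: sentence tuples are distributed randomly among the 4 bolt tasks
        builder.setBolt("split", new SplitSentenceBolt(), 4).shuffleGrouping("sentences");

        // Run in-process for experimentation; a real deployment would use StormSubmitter
        new LocalCluster().submitTopology("word-split", new Config(), builder.createTopology());
    }
}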

Distributed Streaming Classification: Related Work

In this post, I plan to write a quick recap of related work in distributed streaming classification, focusing on decision tree induction. It is still related to my thesis on a distributed streaming machine learning framework. I divide this post into four sections: Classification, Distributed Classification, Streaming Classification, and Distributed Streaming Classification. Without further ado, let’s start with Classification.

Classification

Classification is a type of machine learning task that infers a function from labeled training data. This function is then used to predict the label (or class) of testing data. Classification is also called supervised learning, since we use the actual class output (the ground truth) to supervise the output of our classification algorithm. Many classification algorithms have been developed, such as tree-based algorithms (the C4.5 decision tree, bagged and boosted decision trees, decision stumps, boosted stumps, random forests, etc.), neural networks, Support Vector Machines (SVMs), rule-based algorithms (conjunctive rules, RIPPER, PART, PRISM, etc.), naive Bayes, logistic regression, and many more.

Continue reading Distributed Streaming Classification: Related Work

Hoeffding Tree for Streaming Classification

In the previous post, we summarized C4.5 decision tree induction. Well, since my thesis is about distributed streaming machine learning, it’s time to talk about streaming decision tree induction, and I think it’s better to start by defining “streaming machine learning” in general.

Streaming Machine Learning

Streaming machine learning can be interpreted as performing machine learning in a streaming setting. This setting is characterized by:

  • High data volume and rate, such as transaction logs from ATM and credit card operations, call logs in telecommunication companies, and social media data, i.e. the Twitter tweet stream or the Facebook status update stream
  • Unbounded data, which means the data keep arriving at our system and we cannot fit them all in memory or on disk to analyse them later with batch techniques. This implies we are limited to analysing each data item once, with little chance to revisit it; the single-pass constraint is illustrated in the sketch after this list
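The “analyse the data once” constraint is easiest to see in code. Below is a minimal Java sketch of the usual test-then-train (prequential) loop: for every arriving instance the model first predicts, then trains on that instance, and the instance is then discarded. A running mean stands in for a real online learner, and all names are invented for the example.

import java.util.Random;

public class SinglePassSketch {

    // Stand-in online learner: it predicts the running mean of all labels seen so far.
    static class OnlineMeanLearner {
        double sum = 0;
        long n = 0;
        double predict() { return n == 0 ? 0.0 : sum / n; }
        void train(double label) { sum += label; n++; }
    }

    public static void main(String[] args) {
        OnlineMeanLearner learner = new OnlineMeanLearner();
        Random rnd = new Random(42);
        double squaredError = 0;
        long instances = 1000000;

        // The stream is unbounded in principle; each instance is handled exactly once:
        // predict first (test), then update the model (train), then discard the instance.
        for (long i = 0; i < instances; i++) {
            double label = 5.0 + rnd.nextGaussian(); // a fake incoming instance
            double prediction = learner.predict();   // test
            squaredError += (prediction - label) * (prediction - label);
            learner.train(label);                    // train, single pass only
        }
        System.out.println("prequential MSE: " + squaredError / instances);
    }
}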

Continue reading Hoeffding Tree for Streaming Classification