This is the follow-up to my earlier post about the EEDC project this semester. Well, I would have liked more time to polish my slides, but that's fine. It's already past 🙂
I took a historical perspective to discuss the "rise" of network virtualization, focusing on SDN (Software-Defined Networking), and it all started in 2007. At that time, networks were getting faster, but not better. They suffered from several limitations: high complexity of maintenance, a high chance of inconsistent policies across devices in the network, inability to scale, and dependency on vendors. At the same time, the need for a new network architecture was becoming crucial due to changing traffic patterns (not only between client and server, but also between nodes in a server cluster), the consumerization of IT, and the rise of cloud services and Big Data. Continue reading Rise of Network Virtualization – Final
It has been a week since my last post :). Well, I was pretty occupied: deadlines, plus an impromptu soccer match and a Paris trip :p. Back to business now. Here is my latest slide deck for the EEDC assignment. It discusses how to determine data center locations, based on the paper Intelligent Placement of Datacenters for Internet Services by I. Goiri et al. It is pretty interesting because this kind of information is usually kept confidential by the internet "juggernauts" (Google, Facebook, etc.).
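To give a flavor of the general idea, here is a toy sketch of scoring candidate locations by a weighted sum of cost factors. The locations, factors, weights, and numbers below are all made up for illustration; they are not the paper's actual model or data.

```python
# Toy sketch of cost-based data center placement (illustrative only;
# all weights, factors, and numbers are invented, not from the paper).

# Hypothetical annualized cost factors per candidate location (arbitrary units).
candidates = {
    "Oregon":   {"electricity": 3.0, "land": 1.0, "network": 2.5},
    "Virginia": {"electricity": 4.5, "land": 2.0, "network": 1.0},
    "Texas":    {"electricity": 3.5, "land": 1.5, "network": 2.0},
}

# Relative importance of each factor (also invented).
weights = {"electricity": 0.5, "land": 0.2, "network": 0.3}

def total_cost(factors):
    """Weighted sum of the cost factors for one location."""
    return sum(weights[name] * value for name, value in factors.items())

# Pick the location with the lowest weighted cost.
best = min(candidates, key=lambda loc: total_cost(candidates[loc]))
print(best)  # → Oregon
```

The real problem is of course much richer (availability zones, latency to users, green energy, etc.), but even this toy version shows why the answer is a trade-off rather than a single obvious "cheapest" site.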
As part of the EEDC assignment, I read an article titled "Architecting Cloud Scale Identity Fabric" about the concept of Identity Management as a Service. The article was written by Eric Olden from Symplified. Its main point is the need for a service that manages user identity in the cloud. Well, I think this diagram from the Symplified website is worth more than a thousand words:
This time, our group needed to prepare a presentation about Apache Flume for EEDC homework. Flume is intended to solve the challenges of safely transferring huge data sets from a node (for example, log files on a company's web servers) to a data store (for example, HDFS, Hive, HBase, Cassandra, etc.).
Well, for a simple system with a relatively small data set, we usually build our own custom solution for this job, such as a script that transfers the logs to a database. However, this kind of ad-hoc solution is difficult to scale because it is usually tailored very closely to one particular system. It can also suffer from manageability problems, especially when the original programmer or engineer who built it leaves the company. It is often difficult to extend, and it may have reliability problems due to bugs introduced during implementation.
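For contrast, moving that same log stream with Flume is mostly a matter of configuration rather than custom code. A minimal sketch of a Flume agent definition (the log path and namenode address are hypothetical) that tails a web server log and writes the events to HDFS:

```properties
# Flume agent "a1": one source, one channel, one sink.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: follow the access log as it grows (path is hypothetical).
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/httpd/access.log
a1.sources.r1.channels = c1

# Channel: buffer events in memory between source and sink.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

# Sink: write batches of events into HDFS (namenode address is hypothetical).
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/weblogs
a1.sinks.k1.channel = c1
```

The agent is then started with the stock `flume-ng agent` launcher, pointing it at this file, so the scaling, buffering, and retry logic all live in Flume instead of in a home-grown script.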
Last week, I had another assignment for the Execution Environment of Distributed Computing course: to create an elevator pitch about REST. More specifically, it is a short presentation (only 5 minutes) to convince the audience to use REST instead of SOAP.
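One way to make the pitch concrete is to put the two wire formats side by side. Below is a small illustration (the host, endpoint, and operation names are made up) of how the same "fetch user 42" call looks as a raw REST request versus a raw SOAP request:

```python
# The same "fetch user 42" operation, expressed as raw HTTP requests.
# All names (api.example.com, /UserService, GetUser) are hypothetical.

# REST: the resource has its own URL, and the HTTP verb (GET) carries the intent.
rest_request = (
    "GET /users/42 HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "Accept: application/json\r\n"
    "\r\n"
)

# SOAP: one generic endpoint; the operation and its arguments live inside an
# XML envelope that is always POSTed, whether the call reads or writes.
soap_request = (
    "POST /UserService HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "Content-Type: text/xml; charset=utf-8\r\n"
    "\r\n"
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">\n'
    "  <soap:Body>\n"
    '    <GetUser xmlns="http://example.com/users">\n'
    "      <Id>42</Id>\n"
    "    </GetUser>\n"
    "  </soap:Body>\n"
    "</soap:Envelope>\n"
)

print(f"REST request: {len(rest_request)} bytes")
print(f"SOAP request: {len(soap_request)} bytes")
```

In five minutes of pitch time, the visible difference in ceremony (and byte count) usually argues for itself; REST also gets free wins like URL-based caching and debugging with plain `curl` or a browser.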