Recently Pranav Jain and I attended BOB Conference in Berlin, Germany. The conference started with a keynote on a very interesting topic: a language for making movies. Using a non-linear video editor to make movies is, of course, time consuming. The speaker talked about the struggle of merging the presentation, video and high-quality sound for conference recordings. Clearly, automation was needed here, which could be achieved by 1. writing a plugin for a non-linear video editor, 2. writing a UI automation tool like an operating system macro, or 3. using shell scripting. However, shell scripting for this purpose can also be time consuming, no matter how great shell scripts are. The goal was to edit videos using a language alone, without the language getting in the way of solving the problem. In other words, a DSL (Domain-Specific Language) was required, along with Syntax Parse. Video (https://lang.video/) is a language for making movies that integrates with the Racket ecosystem. It combines the power of a traditional video editor with the capabilities of a full programming language.
The next session was about Reactive Streaming with Akka Streams. Streaming big-data applications is a challenge in itself: processing has to happen in near real time, i.e. there is no time to batch the data and process it later. Streaming also has to be done in a fault-tolerant way, because there is no time to stop and deal with faults. There are two types of streams: bounded and unbounded. A bounded stream can be batched and processed to produce some output, whereas an unbounded stream just keeps on flowing… just like that. Akka Streams makes it easy to model type-safe message-processing pipelines. Type-safe means that the compiler checks that the data definitions of adjacent stages are compatible. Akka Streams also has explicit semantics, which is quite important.
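Akka Streams itself is a Scala/Java library, but the pipeline idea can be sketched in plain Python; the function names below are our own illustration of the concept, not Akka's API:

```python
# Illustrative sketch of a typed streaming pipeline (NOT the Akka Streams API):
# a source produces ints, a flow turns ints into strings, a sink consumes them.
from typing import Iterable, Iterator


def source() -> Iterator[int]:
    """Produces elements of type int (bounded here, for simplicity)."""
    yield from range(1, 6)


def flow(elements: Iterable[int]) -> Iterator[str]:
    """Consumes ints and produces strings: the A -> B transformation."""
    for n in elements:
        yield f"item-{n * 2}"


def sink(elements: Iterable[str]) -> list[str]:
    """Consumes every element and materializes a final value."""
    return list(elements)


result = sink(flow(source()))
print(result)  # ['item-2', 'item-4', 'item-6', 'item-8', 'item-10']
```

Because each stage's input and output types are declared, a type checker can reject a pipeline whose stages don't line up, which is the compile-time compatibility check described above.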
The basic building blocks of Akka Streams are Sources (produce elements of type A), Sinks (consume elements of type A) and Flows (consume elements of type A and produce elements of type B). The source sends data through the flow to the sink. There are also values that are not part of the stream of elements itself: materialized values are useful when, for example, we want to know whether the stream completed successfully. Another concept involved was backpressure. Reading from a file is fast, and splitting that file on \n is even faster, but fetching data over HTTP can be slow due to network connectivity. Backpressure means that any component can say 'whoa, slow down, I need more time'. Everything runs only as fast as the slowest component in the flow, so the slowest component in the chain determines the throughput. However, there are situations where we can't, or don't want to, control the speed of the source. To gain explicit control over backpressure we can use buffering: when incoming elements reach a limit, we can set a buffer size after which new elements are discarded, or we can propagate the backpressure upstream once the buffer is full.
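The bounded-buffer idea can be sketched with a small Python generator; this is our own toy illustration (comparable in spirit to a "drop new" overflow strategy), not Akka's buffering API. Elements arrive in bursts, the consumer drains only one element per burst, and anything that would overflow the buffer is discarded:

```python
# Toy bounded buffer with a drop-new policy (illustrative, not Akka's API).
from collections import deque
from typing import Iterable, Iterator


def bounded_buffer(bursts: Iterable[list[int]], size: int) -> Iterator[int]:
    """Accept bursts of elements into a bounded buffer, dropping the
    newest elements once the buffer is full; the consumer is deliberately
    slower than the source (one element drained per burst)."""
    buf: deque[int] = deque()
    dropped = 0
    for burst in bursts:
        for item in burst:
            if len(buf) < size:
                buf.append(item)
            else:
                dropped += 1  # buffer full: discard the new element
        if buf:
            yield buf.popleft()  # slow consumer drains one element
    while buf:  # drain the remainder once the source is done
        yield buf.popleft()


bursts = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
out = list(bounded_buffer(bursts, size=4))
print(out)  # [1, 2, 3, 4, 5, 7]  -- elements 6, 8 and 9 were dropped
```

Note that plain Python generators are already pull-based: nothing is produced until the consumer asks for the next element, which is the essence of backpressure propagating upstream.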
Next we saw a fun demo of GRiSP: bare-metal functional programming. GRiSP allows you to run Erlang on bare-metal hardware, without a kernel, and a GRiSP board could be an alternative to a Raspberry Pi or an Arduino. The robot in the demo was stubborn but interesting to watch! Since Pranav and I have worked on real-time communication projects, we were inclined to attend a talk on understanding real-time ecosystems, which was very informative. We learned about HTTP, AJAX polling, AJAX long polling, HTTP/2, Pub/Sub and other concepts we could relate to. We learned more about protocols and layers in the last talk of the conference, Engineering TCP/IP with Logic.
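Of the concepts from that talk, Pub/Sub is easy to sketch as a toy in-memory broker; the class and method names below are our own, chosen only to illustrate the pattern of decoupling publishers from subscribers via topics:

```python
# Minimal in-memory pub/sub sketch: subscribers register a callback per
# topic, and publish fans each message out to all of that topic's subscribers.
from collections import defaultdict
from typing import Callable


class Broker:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        for callback in self._subs[topic]:
            callback(message)


received: list[str] = []
broker = Broker()
broker.subscribe("chat", received.append)
broker.subscribe("chat", lambda m: received.append(m.upper()))
broker.publish("chat", "hello")
print(received)  # ['hello', 'HELLO']
```

The publisher never knows who is listening, which is what makes the pattern a good fit for real-time fan-out.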
This is just a summary of our experiences and what we were able to grasp at the conference; we also got to share our individual experiences with Debian on GSoC and Outreachy.
Thank you, Dr. Michael Sperber, for the opportunity, and thank you to the organizers for putting on the conference.