It was my first time attending the SREcon series and also a big step into learning about Site Reliability Engineering. The conference had jam-packed sessions on site reliability, chaos engineering, code review culture, incidents, SLOs and much more.
Resilience Engineering Mythbusting at #srecon: adding functionality is adding complexity to our systems. Systems will fail for all sorts of reasons; always practise best practices (a misnomer), one of them being NOT to deploy on a Friday.
I recently attended React Conf hosted in Henderson, Nevada. The conference was very well put together at The Westin Lake Las Vegas, which had this veryyy amazing view.
The conference started with keynotes by people working on the React Core team at Facebook. The very first keynote was “React today and tomorrow”, where they talked about the popularity of React – how npm downloads are going up and installations of the React Chrome DevTools extension are increasing!
React.lazy was announced recently. React.lazy helps to lazy load components (components let you split the UI into independent, reusable pieces and think about each piece in isolation) without breaking the internet!
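As a rough sketch of what that looks like (the component name and file path here are made up for illustration), lazy loading with React.lazy plus a Suspense fallback might look like:

```jsx
import React, { Suspense } from "react";

// The import() call tells the bundler to split this component into its own
// chunk, which is fetched only when the component first renders.
const OtherComponent = React.lazy(() => import("./OtherComponent"));

// While the chunk is loading, React renders the Suspense fallback instead.
const App = () => (
  <Suspense fallback={<div>Loading…</div>}>
    <OtherComponent />
  </Suspense>
);
```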
To build great UIs using React, a few common factors are generally considered:
- Suspense – simplifying hard things: data fetching, code splitting, async data dependencies.
- Performance – time slicing, making sure that important components are rendered first.
- Developer tooling – helping developers debug and understand their app, for example by providing developer-friendly warnings.
Now, with the React DevTools extension, one can inspect and debug component trees, and the Profiler helps in understanding what’s going on inside the application.
They also talked about the downsides of React, such as:
- Reusing logic – logic is split across different lifecycle methods and classes, which is difficult for both humans and machines.
- Giant components.
These are not separate problems but symptoms of one problem: React does not provide a simple, lightweight stateful primitive simpler than a class component.
The sessions also talked about declarative animation with Pose, a declarative API, which seemed really cool to implement. The difference between declarative and imperative code can be understood as follows:
Declarative code is quite descriptive; it’s often an abstraction. In imperative code, understanding what something is doing means reading how it does it, step by step. If you want to make any contribution to an imperative codebase, you need to understand how the different components are wired up.
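A tiny generic example of the same idea (not from the talk): both functions below double a list of numbers, but the declarative version describes what we want, while the imperative one spells out each step.

```javascript
// Imperative: you spell out *how* – index bookkeeping and step-by-step mutation
// that a reader must trace to understand the result.
function doubleImperative(numbers) {
  const result = [];
  for (let i = 0; i < numbers.length; i++) {
    result.push(numbers[i] * 2);
  }
  return result;
}

// Declarative: you state *what* – "a list where each number is doubled".
const doubleDeclarative = numbers => numbers.map(n => n * 2);

console.log(doubleImperative([1, 2, 3]));  // [2, 4, 6]
console.log(doubleDeclarative([1, 2, 3])); // [2, 4, 6]
```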
Sooo. What’s new in v2?
- Support for PostCSS – previously there was only autoprefixer, which wouldn’t compile new CSS features, but now you can use modern CSS features without worrying about legacy browser support.
- Babel macros – can be used to import GraphQL files, which in turn can be transformed for Apollo to consume at runtime; can use Relay Modern, running the Relay compiler against GraphQL files; can import MDX as a JSX component and run it in the application.
- Sass support, CSS Modules and a lot more! – https://reactjs.org/blog/2018/10/01/create-react-app-v2.html
Day 2 of React Conf started with a talk about how performance is integral to UX. Code splitting is a technique where, instead of sending the whole bundle in the initial payload, we send only what’s needed to render the first screen and lazily load the rest on subsequent navigation. A common problem while implementing code splitting is: what do you display to the user if the view hasn’t finished loading? Maybe a spinner, loader, placeholder…?? But a lot of these degrade the UX. Then Concurrent React came into the picture: Concurrent React can work on multiple tasks at a time and switch between them according to priority. It can partially render a tree without committing the result, and it does not block the main thread.
Two major components of Concurrent React:
- Time-slicing
- Suspense
Let’s consider a scenario in synchronous React: if a user event fires while a render is in progress, it has to wait for the rendering to complete in a single uninterrupted block. In Concurrent React, React pauses the current render, switches to complete the user-blocking task, and then resumes. So basically, Concurrent React is non-blocking: you can render large amounts of data without blocking the main thread.
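Just to illustrate the scheduling idea, here is a toy simulation (not React’s actual implementation; all the names here are made up): render work is split into small units, and before committing each unit the scheduler checks whether a higher-priority event, such as user input, is waiting.

```javascript
// Toy time-slicing simulation – NOT how React works internally.
// Render work is a list of small units; urgent events can interleave between them.
function runConcurrent(renderUnits, events) {
  const log = [];
  renderUnits.forEach((unit, i) => {
    // Before each unit of render work, handle any urgent events that have arrived.
    events
      .filter(e => e.beforeUnit === i)
      .forEach(e => log.push(`event:${e.name}`));
    log.push(`render:${unit}`);
  });
  return log;
}

// A "click" arrives after the header is rendered but before the list is:
const log = runConcurrent(
  ["header", "list", "footer"],
  [{ beforeUnit: 1, name: "click" }]
);
console.log(log); // ["render:header", "event:click", "render:list", "render:footer"]
```

The point is that the urgent click is handled mid-render instead of waiting for the whole tree to finish, which is what a synchronous renderer would force it to do.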
Later, I learnt about SVG in my favorite talk! – https://twitter.com/UrvikaGola/status/1055878157830504448. SVG, i.e. Scalable Vector Graphics: instructions on how to draw an image in a markup file. But… why use SVG?
- Scalable – scales from small to big without loss of fidelity.
- Vector-based – the file size is smaller compared to other image formats.
- Modifiable – can be changed with CSS and JS.
In the React world, using an SVG inline can be done by:
- Importing it as a React component (no HTTP request, because you are not fetching the image separately).
- Converting it to JSX (no HTTP request either, but it’s harder to update the SVG design).
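A small sketch of both approaches (the file path and component names are illustrative; the `ReactComponent` import syntax is the one Create React App v2 supports):

```jsx
// Option 1: import the SVG file as a React component (Create React App v2).
import { ReactComponent as Logo } from "./logo.svg";

// Option 2: hand-convert the SVG markup to JSX (note the camelCased attributes).
const Dot = () => (
  <svg viewBox="0 0 24 24" width="24" height="24">
    <circle cx="12" cy="12" r="10" fill="currentColor" />
  </svg>
);

const Header = () => (
  <header>
    <Logo /> {/* no extra HTTP request; the design stays in the .svg file */}
    <Dot />  {/* no request either, but editing the design means editing JSX */}
  </header>
);
```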
Apart from all the technical learning, there were outdoor activities organized for conference attendees – paddle boarding, board games, lawn games & karaoke etccc!
In the end, a big Thanks to React Conf Team and Facebook for the opportunity – learnt a lot about React, met some great developers and explored a new State! 🙂
Recently I attended Bob Conference in Berlin, Germany. The conference started with a keynote on a very interesting topic: a language for making movies. Using a non-linear video editor for making movies was time consuming, of course. The speaker talked about the struggle of merging presentation, video and high-quality sound for conference recordings. Clearly, automation was needed here, which could be achieved by 1. making a plugin for a non-linear video editor, 2. writing a UI automation tool like an operating-system macro, or 3. using shell scripting. However, shell scripts for this purpose could also be time consuming, no matter how great shell scripts are. The goal was to edit videos using a language alone, without the tooling getting in the way of solving the problem. In other words, a DSL (Domain-Specific Language) was required, along with syntax parsing. Video (https://lang.video/) is a language for making movies which is integrated with the Racket ecosystem. It combines the power of a traditional video editor with the capabilities of a full programming language.
The next session was about reactive streaming with Akka Streams. Streaming big-data applications is a challenge in itself, as they require near-real-time processing – there is no time to batch data and process it later. Streaming also has to be done in a fault-tolerant way; we have no time to deal with faults. Talking about streams, there are two types: bounded and unbounded! A bounded stream basically means the incoming data is batched and processed to give some output, whereas an unbounded stream just keeps on flowing… just like that. Akka Streams makes it easy to model type-safe message-processing pipelines. Type-safe means that at compile time it checks that the data definitions are compatible. Akka Streams also has explicit semantics, which is quite important.
The basic building blocks of Akka Streams are Sources (produce elements of a type A), Sinks (take elements of type A and consume them) and Flows (consume elements of type A and produce elements of type B). The source sends data via the flow to the sink. There are also situations where data is not consumed or produced: materialized values are useful when, for example, we want to know whether the stream was successful or not. Another concept involved was backpressure. When we read from a file, it’s fast. If we split that file on \n, it’s faster. If we fetch over HTTP from somewhere, it can be slow due to network connectivity. What backpressure does is let any component say ‘wooh! slow down, I need more time’. Everything then runs only as fast as the slowest component in the flow, which means the slowest component in the chain determines the throughput. However, there are situations when we really don’t want to, or can’t, control the speed of the source. To have explicit control over backpressure we can use buffering: if many requests come in and a limit is reached, we can set a buffer after which requests are discarded, or we can push the backpressure upstream when the buffer is full.
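Akka Streams itself is Scala/Java, but the pull-based backpressure idea can be sketched in plain JavaScript with async generators (a toy illustration, not how Akka is implemented): the sink pulls each element, so a slow sink naturally throttles the source and the flow above it.

```javascript
// Source: produces elements of type A – but only when the consumer pulls.
async function* source() {
  for (let i = 1; i <= 5; i++) {
    yield i;
  }
}

// Flow: consumes elements of type A and produces elements of type B.
async function* flow(upstream) {
  for await (const x of upstream) {
    yield x * 10;
  }
}

// Sink: consumes elements. Pretend it is the slowest stage in the chain –
// because the pipeline is pull-based, its pace sets the overall throughput.
async function sink(upstream) {
  const out = [];
  for await (const x of upstream) {
    await new Promise(resolve => setTimeout(resolve, 10)); // simulated slow work
    out.push(x);
  }
  return out;
}

sink(flow(source())).then(result => console.log(result)); // [10, 20, 30, 40, 50]
```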
Next we saw a fun demo of GRiSP, bare-metal functional programming. GRiSP allows you to run Erlang on bare-metal hardware, without a kernel; a GRiSP board could be an alternative to a Raspberry Pi or an Arduino. The robot was stubborn, however – interesting to watch! Since Pranav Jain and I have worked on real-time communications projects, we were inclined to attend a talk on understanding real-time ecosystems, which was very informative. I learned about HTTP, AJAX polling, AJAX long polling, HTTP/2, Pub/Sub and other relatable concepts. I learned more about protocols and layers in the last talk of the conference, Engineering TCP/IP with logic.
This is just a summary of our experiences and what we were able to grasp at the conference, and a chance to share our individual experiences with Debian through GSoC and Outreachy.
Thank you Dr. Michael Sperber for the opportunity, and the organizers for putting together the conference.
KubeCon + CloudNativeCon North America took place in Austin, Texas from 6th to 8th December. Before that, I stumbled upon a great opportunity from the Linux Foundation which made it possible for me to attend and expand my knowledge about cloud computing, containers and all things cloud native!
I would like to thank the diversity committee members – @michellenoorali, @Kris__Nova, @jessfraz, @evonbuelow and everyone (+Wendy West!!) behind this, for going the extra mile to make this great diversity-inclusion initiative possible for me and others. It gave me an opportunity to learn from experts and experience the power of Kubernetes.
After travelling 23+ hours in flight, I was able to attend the pre-conference sessions on 5th December. The day concluded with the amazing Empower Her evening event, where I met an amazing bunch of people! We had some great discussions and food – thanks @nutanix!
On 6th December, I was super excited to attend Day 1 of the conference. When I reached the venue, the Austin Convention Center, there was a huge hall with *4100* people talking about all things cloud native!
It started with an informative keynote by Dan Kohn, the Executive Director of the Cloud Native Computing Foundation. He pointed out how the CNCF has grown over the year, from 4 projects in 2016 to 14 projects in 2017, and from 1400 attendees in March 2017 to 4100 attendees in December 2017. It was really thrilling to learn about the growth and power of Kubernetes, which inspired me to contribute to the project.
It was hard to choose which sessions to attend because there was just so much going on!! I mostly attended beginner & intermediate level sessions, and missed out on the ones which required technical expertise I don’t possess – yet! Curious to know what other tech companies were working on, I made sure I visited all the sponsor booths and learnt what technology they were building. Apart from that, they had cool goodies and stickers – the place where people are labelled as sticker-person or non-sticker-person! 😀
There was a diversity luncheon on 7th December, where I had really interesting conversations with people about their challenges and stories related to technology. I made some great friends at the table – thank you for voting my story as the best story of getting into open source, and thank you Samsung for sponsoring this event.
KubeCon + CloudNativeCon was a very informative and hugee event put on by the Cloud Native Computing Foundation. It was interesting to learn how cloud native technologies have expanded along with the growth of the community! Thank you Linux Foundation for this experience! 🙂
Last week in Germany, a few miles away from COP23, the conference where political leaders & activists meet to discuss climate, there was a bunch (100, to be exact) of developers and environmentalists participating in Hack4Climate, working on the same global problem – climate change.
COP23, the Conference of the Parties, happens yearly to discuss and plan action on combating climate change, especially around the Paris Agreement. This year it took place in Bonn, Germany, home to a United Nations campus. Despite the ongoing efforts of governments, it’s the need of the hour that every single person living on Earth contributes at a personal level to fighting this problem. After all, we all have – including myself – somehow contributed to climate change, knowingly or unknowingly. That’s where the role of technology comes in: to create solutions by providing a pool of resources and correct facts, so that everyone can start taking healthy steps.
I will try to put into words the thrilling experience Pranav Jain and I had participating as 2 of the 100 participants selected from all over the world for Hack4Climate. Pranav also worked closely with Rockstar Recruiting and the Hack4Climate team to spread awareness and bring in more participants before the actual event. It was a 4-day hackathon which took place on a *cruise ship* in front of the United Nations campus. Before the hackathon began, we had informative sessions from delegates of various institutions and organisations like the UNFCCC (United Nations Framework Convention on Climate Change), the MIT Media Lab, IOTA and Ethereum. These sessions helped us all get more insight into the climate problem from both a technical and an environmental angle. We focussed on using distributed ledger technology – blockchain – and open source, which can potentially help combat climate change.
Pranav Jain and I worked on green, low-carbon diamonds through our solution, Chain4Change. We used blockchain to track the carbon emissions in the mining of minerals, particularly diamonds. Our project tracks the mining, cutting and polishing process for every unique diamond available for purchase. It could also certify a carbon offset for each process and help a diamond company improve efficiency and save money. Our objective was to track carbon emissions throughout the supply chain, considering the kind of machinery, transport and power being used. The technologies used in our solution were Solidity, Android, Python & Web3JS, all integrated on a single platform.
We wanted to raise awareness among ordinary customers by putting the numbers (the carbon footprint) in front of them, so that they know how much energy and fossil fuel was consumed for a particular mineral. This would help them make a smart, climate-friendly and greener decision during their purchase. After all, our climate is more precious than diamonds.
All project tracks had support from a particular company, which gave more insight and support on data and the business model. Our project track was sponsored by Everledger, a company which believes that transparency is the key to ensuring ethical trade.
Everledger’s CEO, Leanne, talked about women in technology and swiftly made us realize how we need equal representation of all genders to tackle this global problem. I talked about Outreachy with other female participants, and amidst such a diverse set of participants I felt really connected with a few open source contributors I met. The open source community has always been very warm and fun to interact with. We exchanged which conferences we had attended, like FOSDEM and DebConf, and what projects we had worked on. Outreachy’s current round 15 is ongoing; however, applications for round 16 of the Outreachy internships will open in February 2018 for the May to August 2018 internship round. You can check this link here for more information on projects under Debian and Outreachy. Good luck!
Lastly and most importantly, thank you Nick Beglinger (CleanTech21 CEO) and his team, who put on this extraordinary event despite the initial challenges and made us all believe that yes, we can combat climate change by moving further, faster and together.
Thank you Debian, for always supporting us:)
A few pictures…
Chain4Change Team Members – Pranav Jain, Toshant Sharma, Urvika Gola
On 5th August I got a chance to attend, speak at and experience DebConf 2017 in Montreal, Canada. The conference was ‘stretch’ed from 6 August to 12 August.
It’s pretty late for me to document my DebConf fun-learning experiences, thanks to my delaying tactics.. which I need to overcome.
But better late than never – I had an amazing time at DebConf. I got to meet and learn from my Outreachy mentor, Daniel Pocock! 😀
One thing I loved about DebConf was the amount of diversity in the Debian family!
As a beginner, I got a big picture of all the projects out there. Daniel helped me a lot in getting started with packaging in Debian. I really appreciate the time he took out to guide me at DebConf, and Pranav remotely.
One specific line I liked from Daniel’s Open Day talk on 5th August, “Free Communications with Free Software and Debian”, while talking about free RTC (Real Time Communication), is:
..Instead of communication controlling the user, the user can control the communication..
I talked about free RTC, my project Lumicall, and my journey as an Outreachy intern with Debian. I also covered my co-speaker’s work on Lumicall as a GSoC 2016 student.
Meeting the Outreachy family feels amaazzing! Karen Sandler, executive director of the Software Freedom Conservancy gave a talk on the Significance and Impact of Outreachy and Debian’s support for the programme.
DebConf 2017 has been a wonderful conference with the community being very welcoming and helpful 🙂
I researched creating a white label version of Lumicall. A few ideas on how the white label build could be used:
Existing SIP providers can use a white label version of Lumicall to expand their business and launch a SIP client. This would provide a one-stop shop for them!!
New SIP clients/developers can use the Lumicall white label version to get the underlying machinery for making encrypted phone calls using the SIP protocol; it will help them focus on the additional functionalities they would like to include.
Documentation for implementing white labelling – Link 1 and Link 2
Since Lumicall is mostly used to make encrypted calls, there was a need to designate quiet times during which the phone will not make an audible ringing tone. If the user has multiple SIP accounts, they can set silent mode on just one of them – maybe the Work account.
Documentation for adding silent mode feature – Link 1 and Link 2
Adding a 9-patch image
Using Lumicall, users can send SIP messages to each other. Just to improve the UI a little, I added a 9-patch image to the message screen. A 9-patch image is created using the 9-patch tool and saved as imagename.9.png. The image resizes itself according to the text length and font size.
Recently, my co-speaker Pranav Jain and I got a chance to speak at the Open Source Bridge conference, which was held in Portland, Oregon!
Pranav talked about GSoC and I talked about Outreachy; together we talked about the free RTC project, Lumicall.
The OSB conference was much more than just a ‘conference’. The talks had meaning, not just content. I am referring to the amazing keynote session by Nicole Sanchez on tech reform. She explained wonderfully the need of the hour: diversity inclusion is not just ‘inclusion’ – the focus should be on what comes after inclusion, growth.
We also met several Debian developers and a Debian mentor for Outreachy (hoping to meet my mentors someday!!)
Thanks to OSB, I got to meet Outreachy coordinator Sarah Sharp! It was wonderful meeting an Outreachy person! 😀 We talked and exchanged ideas about the programme, and she took beautiful pictures of us delivering the talk.
The talk ended with an unexpected and very precious handwritten note from Audrey Eschright..
Thank you Debian for giving us a chance to speak at Open Source Bridge and to meet wonderful people in Open Source. ❤
Recently I had the opportunity to be a mentor at Ultrahack, which was held in Helsinki, Finland. Ultrahack is a combination of a hackathon and a startup accelerator. As a mentor, my role was to ensure that each team had the best possible chance of fulfilling the evaluation criteria for the contest. I also helped teams with development and pitching.
It was a very exciting place to brainstorm life-changing ideas and convert those ideas into working models. I met so many amazing developers who were building cool stuff. There were a few open source developers, and student open source developers like me!
Being a Debian contributor, I spread the word about what Debian is all about and what makes it the best Linux distribution. I talked to students about the various programmes Debian participates in as a mentoring organisation, like GSoC and Outreachy. I also described my role as a GSoC student under Debian and the free RTC project I worked on. Many female developers were interested in the Outreachy programme, so I described the projects Debian currently has under Outreachy.
During the hackathon, I talked to people about the upcoming annual DebConf. I informed them that they could still apply as a speaker or for diversity bursaries, and told them about the logo-making competition.