
The Internet of Things (IoT) - Where Should I Start?

IoT is a concept we hear about frequently. While trying to write my blog in English, I felt the need to write a post about IoT in Turkish. Since we research and work through every topic in English, we unfortunately become familiar with the concepts in English as well. In that case, just as we cannot really translate Blockchain into Turkish, let's not translate IoT (short for Internet of Things) either and simply accept it as it is.

What I intend to convey with this post:

  • Briefly, what is IoT?
  • What are the basic steps we need to know to develop an IoT project?

What is IoT? Is it a new technology? A tool?

No, it is not.

ref: http://intersog.com/blog/internet-of-things-the-future-of-your-tomorrow/

IoT, the Internet of Things, is a requirement that best describes the era we live in, bringing together the methods that have changed and the approaches that have been developed alongside the technologies used from the past to the present. Let's not fall into the chicken-and-egg dilemma: did we decide we wanted IoT and then develop the methods and technology accordingly? I don't think so. I don't believe Nikola Tesla was trying to start an industrial revolution while playing with electricity (the second industrial revolution was made possible by the use of oil and electricity in industry). Frankly, it is not easy to know what every step taken in technology will give birth to. Today we are trying to understand what Blockchain might change. While people were wondering what could be done with the WWW, which became popular alongside the Internet in the mid-90s, those who moved early and correctly created social applications (Facebook and the like) and e-commerce sites (Amazon, Alibaba and the like); even though these are still popular today, we certainly would not say no to a new paradigm. We can discuss that topic later.

... In time, the Internet of Things will declare its own sanctity as well.

We are trying to build the Internet of Things because we hope it will make us healthier, happier and more powerful. Once the Internet of Things starts to run smoothly, we may dissolve in the flood of data, reduced from engineer to chip and from chip to data, like a clump of soil in a roaring river.

Yuval Noah Harari - The Data Religion, Homo Deus

Without going into technical detail, I will try to convey the basic components of IoT and the axes along which it exists. Perhaps I will publish another post describing the building blocks, the steps required for an IoT system/platform and the technologies involved. In this post, I mostly want to describe how IoT manifests itself at the industry level, together with current trends.

IoT, Horizontally and Vertically

The Internet of Things exists along two main axes. The first is an axis (the horizontal one) occupied by platforms such as Amazon, Google, Thingworx and IBM, which can provide infrastructure for almost every industry. Here, IoT infrastructure providers try to supply the materials and the environment needed to build solutions for any sector. Manufacturers and service providers in sectors such as transportation, agriculture, urban/municipal services, logistics, healthcare and home appliances (and more) choose these platforms, or more specialised ones, to improve their own products and processes and to bring technological innovations and improvements to the way they communicate with their end users.

With the infrastructure offered by platform providers, we can do everything these sectors need: connect devices to the internet, stream and collect data, run statistical analysis on the data and build applications on top of it. Specialised, sector-specific solutions are all possible on these platforms. Before I forget, the reason for all this effort is data. Data stands at the center of everything, and the goal is to reach that data sitting somewhere. After that, things get easier, at least under today's conditions. We used to produce data in the past too, but not as much as today. According to research, the data produced on the internet is estimated at 2.5 exabytes a day (as of 2016 statistics).

According to Gartner's report, by the end of this year (2017) 8.4 billion devices will be connected to the internet and producing data; by 2020 this number will rise to around 20 billion. Even estimating the amount of data that will be produced globally is exhausting. So we need, and will continue to need, systems that can make use of this enormous amount of data. If you are wondering what this data is good for and what we can do with it, I can tell you: everything!

Well, let's assume we are somehow familiar with the idea. We need to collect the data, so let's ask how we can do that and look for the answer. If I want to build a project or solution within the scope of IoT, I need a platform. If I don't have the competence to build a platform myself, I can buy one from the platform providers that offer it. If I want to develop an application on top of this platform and make my products or processes connected and smart, I can either develop an application containing the customisations appropriate to my industry on top of that platform, or buy one from an application provider that has already done so.

From the providers' perspective, IoT platform/system/application providers can be divided into two groups:

  • Horizontal platform providers (those without sector-specific depth)
  • Vertical platform/application developers (those who know the sector and can respond to specialised demands)

Connected Products and Connected Operations

If we subject IoT to a classification of its own, it is possible to classify it along more than one axis:

  • Consumer IoT
  • Commercial IoT
  • Industrial IoT

The items above classify IoT by its area of impact and benefit. If we treat IoT as a staged process, both the methods and the technologies used may differ, but at the core the stages below are inevitably common to all axes and classifications.

  • Devices that can generate data (IoT devices/hardware) - The product being able to generate data about itself or its environment.
  • Connectivity - Making sure these devices/sensors can connect to the internet.
  • Data capturing - Streaming the generated data from where it resides to a chosen platform so that it can become more meaningful.
  • Data processing - Processing the streamed data in real time as required and making various measurements based on its values.
  • Data storing - Storing the data so that analytical inferences can be made on it.
  • Data analytics - Methods such as artificial intelligence applications, big data analysis and forward-looking predictions.
  • Applications (human value) - Real-time processing and analytics enabling people/end users to interact directly with the products they own.
  • Applications (operational value) - Using real-time processing and analytics to improve operations and to build applications with which manufacturers can monitor their industrial activities.
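To make the first few stages concrete, here is a minimal, purely illustrative Java sketch of a device publishing a sensor reading over MQTT using the Eclipse Paho client. The broker address, client id, topic and payload format are assumptions made up for this example, not part of any specific platform.

```java
import java.nio.charset.StandardCharsets;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TemperaturePublisher {
    public static void main(String[] args) throws MqttException {
        // Hypothetical broker URL and client id; a real platform supplies its own endpoint and credentials.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "greenhouse-sensor-42");
        client.connect();

        // A single reading as a simple JSON payload (this is where data capturing starts).
        String payload = "{\"deviceId\":\"greenhouse-sensor-42\",\"temperatureC\":21.7}";
        MqttMessage message = new MqttMessage(payload.getBytes(StandardCharsets.UTF_8));
        message.setQos(1); // at-least-once delivery

        // Publish to an illustrative topic; the platform's ingestion layer takes over from here.
        client.publish("greenhouse/line1/temperature", message);
        client.disconnect();
    }
}
```

On the platform side, the same topic would feed the processing, storage and analytics stages listed above.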
Why do IoT projects fail?

There are many reasons for failure. IoT failures usually begin with not being able to lay out the right roadmap and not being able to build a team of the right people. Exosite has published a nice whitepaper that tries to frame IoT project development as a process and describes the steps needed to build a successful IoT project. It lists the important steps on the road to success as follows:

  • Explore
  • Validate
  • Accelerate
  • Commercialize

Businesses first need to understand what contributions and benefits IoT applications can bring to their products and/or processes. If they want to increase customer satisfaction and add value to their products, they can position this so that the end user benefits directly, or they can achieve it by improving their processes.

The next step should be the proof-of-concept phase for the proposed solutions and ideas. Small-scale developments that deliver quick results at low cost can be carried out. The reasoning is much the same as why we prefer agile development methods over traditional software development methods: the sooner you cut your losses, the better.

If a process begins after the customer's purchase in which the manufacturer can no longer intervene, the risk is very high. For connected products, testing must be done very thoroughly before the commercialisation stage.

Developing an IoT project and reaching success is only possible by understanding both the technological developments and the business's own domain of expertise. We have some advantages that come with today's conditions: evolving technologies and methods force us to think about innovations that were not on our minds before. The pace of development and change is much higher, and that flexibility can turn into a disadvantage very quickly.

Domain-centric Architectures are Cool, but Why?

Finding the best architecture suited for your culture

I have always believed that an organisation's culture is reflected in its production culture. I have been working at startup companies throughout my professional career, including my own company, since I graduated from university, and every time I had the chance to create a team and a development culture from scratch. People who have experienced working at a startup will understand me well. In any case, you would like to build a team full of rock stars... limited funding... you figure you have at most two chances to release your product, and of course you barely know how to sell it...

There are lots of practices for starting a new product. It is absolutely reasonable to build a prototype first and prove your idea, then get funding, increase your resources, and design and implement your glowing and bubbling product again. After all, millions of users are holding their breath waiting for your product (am I right?). So now you have to decide, or hire someone to decide, the next step.

In the next step, you probably want to use your wealth to get wealthier than you are now. However, reaching your users at high scale is not as easy as it was before. Spending some money on re-organisation... paying more salaries to hire qualified engineers, and so on. Right here, right now, suppose you are a technology-oriented company, you are spending money on your technology infrastructure, and you have complex domains. Someone will say, "if we want to redesign our software infrastructure, maintainability, flexibility and scalability should be our key concerns...". And the reasoning follows: "if we need to construct a good communication flow and integration between all parts of our complex domains, the new architecture should also offer a solution to this."

In this post I would like to help those who are at that next step by explaining next-generation architecture approaches developed by software gurus.

According to Conway's law, "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure." As the law clearly states, there is a strong relationship between organizational structures and software, even codebase structures and conventions.

The law is based on the reasoning that in order for a software module to function, multiple authors must communicate frequently with each other. Therefore, the software interface structure of a system will reflect the social boundaries of the organization(s) that produced it, across which communication is more difficult. Conway's law was intended as a valid sociological observation, although sometimes it's taken in a humorous context. (Source : Wikipedia)

Building up a team according to functionality or domain knowledge is a strategic decision. There will be another post discussing the functional organisation of a software development team.

Database-centric architecture, Source: Pluralsight

The visualisation above shows the relationship between software architecture layers and functional teams in a traditional organisation.

I have to admit that this post includes quotations from the Clean Architecture course on Pluralsight, which I reviewed in another post here.

Traditional Database-centric Architecture

In database-centric architectures, the database is at the center of the system. In this approach, the UI, the business logic and the data access all depend on the database; it is naturally the essential part of the system, and everything revolves around it. It is worth noting that I am not campaigning to discredit this approach, but there are some facts we cannot ignore.

Database-centric architecture, Source : pluralsight

Disadvantages of the Database-centric Approach
  • It was designed to serve a single type of presentation-layer application - that is, before smartphones came into play.
  • It is not flexible or agile. You cannot move the essential parts, because the parts you want to change are stuck to the layers above or below them.
  • All dependencies point towards the database.
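As a contrast to what comes next, here is a minimal, purely illustrative Java sketch of business logic written in this database-centric style, talking straight to a relational database over JDBC; the class, table and column names are made up for the example.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Business logic wired directly to the database: every caller, test and UI
// now depends on a running relational database and on this exact schema.
public class OrderService {
    private final String jdbcUrl;

    public OrderService(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    public void placeOrder(String orderId, double amount) throws SQLException {
        try (Connection connection = DriverManager.getConnection(jdbcUrl);
             PreparedStatement statement =
                     connection.prepareStatement("INSERT INTO orders (id, amount) VALUES (?, ?)")) {
            statement.setString(1, orderId);
            statement.setDouble(2, amount);
            statement.executeUpdate();
        }
    }
}
```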

Domain-centric Architectures

In domain-centric architectures, the domain and the use cases are essential; the presentation layer and the database are just details.

Domain-centric architecture, Source : pluralsight

All dependencies point towards the domain. There are three well-known domain-centric architectures. The first is the "hexagonal architecture" from Alistair Cockburn. It is a layered architecture model in which the application is at the center of the system; it is a plugin architecture built around ports and adapters.

Next is the onion architecture, from Jeffrey Palermo. This is also a layered architecture, with the domain at the center surrounded by the application layer. The outer layers consist of a thin UI as the presentation layer and an infrastructure layer that includes the persistence layer. Again, all dependencies point towards the center of the architecture, and no inner layer knows about any outer layer.

In this architecture, we can test the application without UI or database dependencies.

Database-centric architecture, Source : pluralsight
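Here is a minimal Java sketch of the same idea, using hypothetical Order/OrderRepository names invented for this example: the use case in the application core depends only on a port (an interface), and an in-memory adapter is enough to exercise it without a UI or a database.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Domain entity at the center.
class Order {
    final String id;
    final double amount;
    Order(String id, double amount) { this.id = id; this.amount = amount; }
}

// Port: declared by the core, implemented by the outer infrastructure layer.
interface OrderRepository {
    void save(Order order);
    Optional<Order> findById(String id);
}

// Use case in the application layer; it has no idea what the persistence detail is.
class PlaceOrderUseCase {
    private final OrderRepository orders;
    PlaceOrderUseCase(OrderRepository orders) { this.orders = orders; }
    void place(Order order) { orders.save(order); }
}

// Outer-layer adapter used for tests: no UI, no database, just the domain.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();
    public void save(Order order) { store.put(order.id, order); }
    public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
}
```

A real deployment would swap the in-memory adapter for a JDBC- or document-store-backed one in the infrastructure layer, without touching the use case.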

Finally, the author covers Clean Architecture from Uncle Bob here.

In this architecture, entities are at the center, surrounded by the application layer, that is, the use cases. The outer layer consists of ports and adapters that adapt the application core to the external dependencies via controllers, gateways and presenters. The little illustration at the bottom right of the image below is Ivar Jacobson's BCE architecture pattern, which explains how the presentation layer and the application layer should be wired up.

These three architectures all aim at the same solution. The fundamental issues in traditional architectures, such as tight coupling and lack of separation of concerns, are all addressed by them pragmatically. As you already know, the traditional MVC-like architecture is one of the most widely used and accepted approaches in industrial applications. But things are changing, and visionaries are aware that designing software in which the UI and business logic, and the business logic and persistence layer, are tightly coupled is not a good way to design software, because of maintainability and flexibility issues.

Technology is always changing because of real-world needs, so successful change management becomes an increasingly important subject for businesses. For instance, what happens if you replace your data access system, moving from a relational database to a NoSQL store, or add another NoSQL store alongside it? What if you want to change your caching system from one product to another? Long story short, this shouldn't be a pain in the ass on every attempt at a change.
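To illustrate what that change looks like with a domain-centric design, here is a hedged sketch of another adapter for the same hypothetical OrderRepository port from the previous example, this time backed by a document store via the MongoDB Java driver's client API; the database and collection names are invented. The core use case does not change at all.

```java
import java.util.Optional;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

// A second implementation of the same port; swapping persistence means
// registering this adapter instead of the relational one, nothing more.
class MongoOrderRepository implements OrderRepository {
    private final MongoCollection<Document> orders;

    MongoOrderRepository(String connectionString) {
        this.orders = MongoClients.create(connectionString)
                .getDatabase("shop")       // hypothetical database name
                .getCollection("orders");  // hypothetical collection name
    }

    public void save(Order order) {
        orders.insertOne(new Document("_id", order.id).append("amount", order.amount));
    }

    public Optional<Order> findById(String id) {
        Document doc = orders.find(Filters.eq("_id", id)).first();
        return doc == null
                ? Optional.empty()
                : Optional.of(new Order(doc.getString("_id"), doc.getDouble("amount")));
    }
}
```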

Adopting Continuous Delivery Practices to Increase Efficiency: A Case Study - Part 2

Fig. 3. Illustration of Lojika’s Deployment Pipeline

Coping With Challenges

Lojika, also known as "Lojika Field Labs", is an R&D company that runs a number of national and EU-granted innovation projects. By its nature, the research and development process of innovation projects involves many ambiguities. Lojika gathers information directly from different fields around the world for its innovation projects. Because Lojika is trying to solve society-related issues around car-pooling, the Physical Internet, the sharing economy and crowd-sourcing, embracing change becomes an inevitable necessity for the company culture to succeed. Some of the challenges we have faced are:

  • Clarifying ambiguities regarding the needs of customers
  • Adapting to a rapidly changing environment
  • Evaluating and implementing field feedback quickly in order to respond fast
  • Minimizing the delay of the analyze-develop-test-deploy cycle
  • Increasing participation in all phases, from the first evaluation of feedback to release time
  • Ensuring transparency

To succeed in changing environments, selecting a proper software development approach becomes a key factor in adopting changes quickly and delivering new ideas and change requests with less doubt and effort. This is exactly where continuous delivery practices come into play. Continuous delivery makes it possible to continuously adapt software in line with user feedback, shifts in the market and changes of business strategy. Testing, support, development and operations work together as one delivery team to automate and streamline the build, test and release process [15].

As illustrated in figure 2, Lojika’s communication flow is composed of three main processes. The field study is the best step for understanding users and identifying their needs; new ideas also emerge during this process, which the marketing team leads, with the product and development teams as important participants. Analyzing the feedback is the next main process of the communication flow, in which the product team is responsible for distilling the information gathered from the field. The last step of the communication flow, and the subject of this study, is the delivery pipeline, which is the baking step of the whole flow. In other words, development is just one part of continuous delivery. In this process, information coming from the previous step goes into the baking pipeline, where the baking pipeline stands for the development or build pipeline, the most significant part of continuous delivery. Some feedback is implemented in a build that is also a release candidate and is therefore ready for deployment to production. This is the heart of continuous delivery, and this approach eventually keeps the software ready to ship for every build baked in the deployment pipeline. To sum up, continuous delivery is a vital part of building a continuous communication cycle between stakeholders, including users in the field.

Taking into consideration Lojika’s culture and the challenges described above, a tailored version of a deployment pipeline is described in the next section.

Building the Deployment Pipeline

The deployment or build pipeline is a high-level visualization of the connected jobs that compose the entire pipeline. Each job has its own responsibility, and the jobs are connected to each other as upstream and downstream, which means that each job can trigger its downstream jobs and pass information to them based on its own success or failure. Considering the fundamental principles and practices, and with the use of some tools and technologies, it is possible to build and operate a successful deployment pipeline.
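As a toy illustration of the upstream/downstream idea (not Lojika's actual configuration), the following Java sketch runs a chain of hypothetical jobs in which a downstream job only runs if its upstream job succeeded:

```java
import java.util.List;
import java.util.function.BooleanSupplier;

public class PipelineSketch {
    // A job has a name and an action that reports success or failure.
    record Job(String name, BooleanSupplier action) {}

    public static void main(String[] args) {
        // Hypothetical job names; in a real setup these are CI jobs wired as upstream/downstream.
        List<Job> pipeline = List.of(
                new Job("compile-and-unit-test", () -> true),
                new Job("quality-gate", () -> true),
                new Job("deploy-to-test", () -> true));

        for (Job job : pipeline) {
            if (!job.action().getAsBoolean()) {
                System.out.println(job.name() + " failed: notify the team, stop the pipeline");
                return; // downstream jobs never run on an upstream failure
            }
            System.out.println(job.name() + " succeeded: trigger the downstream job");
        }
        System.out.println("Build is a release candidate");
    }
}
```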

Creating a delivery workflow for the company is more important than choosing tools and technology. Nevertheless, technologies and tools used for the pipeline are described in table I along with their explanations.

As shown in Figure 3, each step is numbered. In the next part of the study, each working phase in the workflow defined for Lojika’s TAG project is explained in detail, along with the difficulties encountered, the steps taken to solve them, and the technologies and tools used to accomplish each step.

A. Development Phase

Everything starts after feedback is received. Once the analysis work on the feedback is completed, it is transferred to the development phase. Developers complete their work in their own development environments and then submit it for review. If the review is positive, the change is merged into the develop branch and is then ready to be sent to the continuous integration server. Developers collaborate with Git, and the source code is reviewed through Github.

Step #A.2 is responsible for the transition from development to continuous integration. At this stage, changes pushed to Github are pulled by Jenkins, one of the most well-known continuous integration applications. Task #A.2 is also the first task run in the deployment pipeline.

B. Continuous Integration Phase

#B.1: The build process for the latest version of the source code starts. If the compilation process fails, developers are informed about the failure via the Jenkins email plugin. Otherwise, the pipeline goes on.

#B.2: Automated unit and component tests are run in the same job. If any of the tests fail, developers are notified by e-mail. If there is no problem, the next process starts.
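For flavour, here is a minimal example of the kind of unit test that runs at this stage, written with JUnit; the FareCalculator class and its behaviour are invented purely for illustration and are not Lojika's code.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical production class, shown inline so the example is self-contained.
class FareCalculator {
    double sharePerRider(double totalFare, int riders) {
        return totalFare / riders;
    }
}

public class FareCalculatorTest {
    @Test
    public void splitsTheFareEvenlyBetweenRiders() {
        FareCalculator calculator = new FareCalculator();
        // If this assertion fails, the pipeline stops here and developers are notified by e-mail.
        assertEquals(5.0, calculator.sharePerRider(20.0, 4), 0.0001);
    }
}
```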

#B.3: SonarQube is used for quality measurements of the source code. With Jenkins’ SonarQube integration, the source code is analysed against the required quality measurements. If the quality values are below the specified thresholds, the process will not continue and the developers will be informed.

#B.4: When everything is complete and error-free, the artifacts created for later use (Jar files, metadata, etc.) are uploaded to the JFrog artifact repository.

#B.5: Both low-level documentation (JavaDoc, etc.) and the REST API documentation required by client developers are generated automatically. Swagger is preferred for creating the REST API documentation.
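As a sketch of how such documentation is typically derived, assuming a Spring MVC stack with springfox/Swagger annotations (the endpoint and its behaviour are invented for the example, not taken from the TAG project):

```java
import io.swagger.annotations.ApiOperation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RideController {

    // Swagger scans annotated handlers like this one and renders the
    // generated REST API documentation for client developers.
    @ApiOperation(value = "Returns the current status of a ride")
    @GetMapping("/rides/{id}/status")
    public String rideStatus(@PathVariable String id) {
        return "ACTIVE"; // placeholder response for the sketch
    }
}
```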

#B.6: Lojika’s development architecture is multi-layered; the infrastructure and frontend development teams are positioned separately. The client team needs to be able to access the backend services with minimum effort so that it can work smoothly. In this step, a Docker image is created that client developers can use in their local development environments. From that point on, client developers can pull the desired version of the backend services to their machines at any time.

#B.7: After each step in this process is successfully completed, both backend and client developers will be informed.

#B.8: The same build is used for each phase transition. This means that if you want to design a successful pipeline, the ”Only Build Your Binaries Once” rule mentioned in the previous chapters must be followed. So the artifact created in step B.1 is itself transferred, by the trigger in step B.8, to the next phase, the ”Testing Phase”.

C. Testing Phase

Some steps in the testing phase proceed automatically, but in some cases manual intervention is required. This stage can be performed fully automatically or fully manually, but at the end of the day each build that comes through the pipeline must be approved by a QA role and declared ready for the next phase.

#C.1: The build on the pipeline is deployed on the test servers. This is done with plugins provided by Jenkins.

#C.2: At this point, all database changes are applied using Liquibase. Managing database versioning and changes is one of our main challenges, so we have also automated the management of database changes, automating work that would otherwise disrupt the pipeline and waste time.

#C.3: Writing tests for every case and every scenario is a very demanding business. In general, tests are written from the most important and highest-priority scenarios down to the least significant ones. It is a fact that time spent on unnecessary tests cannot be allocated to risky and important ones. Therefore, automating the tests that are relatively easy to automate is strategically more reasonable, and this is Lojika’s strategy. Scenarios that are easy to automate and do not involve a lot of integration with external systems are run by Appium for both iOS and Android clients. This way, more time can be allocated to risky and dependent scenarios.
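To give a flavour of what an easy-to-automate scenario can look like, here is a hedged sketch using the Appium Java client; the capabilities, element id and app under test are illustrative assumptions, not Lojika's actual test code.

```java
import java.net.URL;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;

public class LoginScreenTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");      // illustrative capabilities
        caps.setCapability("deviceName", "emulator-5554");
        caps.setCapability("app", "/path/to/app-debug.apk");

        // Assumes an Appium server running locally on its default port.
        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // A simple scenario with no external integrations: tap the login button.
            driver.findElement(By.id("btn_login")).click();
        } finally {
            driver.quit();
        }
    }
}
```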

#C.4: The risks, uncertainties and challenges in this step are very different from those in the previous one. Non-functional tests, however, need to be fully automated to ensure continuity and sustainability. In particular, performance, load and stress tests of the technically risky parts, which may be overloaded by users, are performed automatically with JMeter and BlazeMeter.

#C.5: As already mentioned, QA approval is absolutely necessary for this phase to be completed successfully. Thanks to the Jenkins promotion plugin, the person responsible for promotion promotes the current build to the next stage of the pipeline.

D. Release Phase

#D.1: The same artifact that completed each stage successfully is now ready to go to a production-like environment. The purpose of this step is to simulate the real environment in a production-like environment before the release.

#D.2: The same operation as in C.2 is applied.

#D.3: Some automation again. Although the artifact remains unchanged and everything went well during the testing phase, it is a good idea to automatically test specific scenarios one last time.

#D.4: Continuous delivery without any manual intervention is not possible. The good thing is that an artifact that makes it to this stage is ready to release. Without any fixed rule, everyone involved exercises this last build with realistic scenarios one final time before the release.

#D.5: There is no rule that every artifact should be released. According to the pipeline logic, every promoted artifact can be released at any time. Depending on the release strategy and conditions, the release decision may be delayed or an older artifact may be chosen for release. Remember that with continuous delivery, each artifact is expected to be a ”release candidate”. But whatever the case, there must be a role that makes the release decision.

#D.6: After the promotion decision, the artifact is deployed to the production environment and stakeholders related to this process are informed about the changes.

Results

In Lojika, continuous delivery has been successfully implemented and 2070 builds have been produced so far. 827 of them have been successfully deployed to the test servers, 229 were deployed to the staging server, and 101 made it to production.

Until now, unit and component tests have been run more than 620,000 times.

In addition to these numerical improvements, cultural and organizational improvements have been observed as well:

  • Quick response to user feedback
  • Improved end-to-end communication between the product and software development teams
  • Increase in company-wide participation
  • Reduction of hot-fixes
  • Ease of managing database changes in any environment
  • Improved software quality

Conclusion and Future Work

This study attempts to explain the benefits of continuous delivery, which is part of the agile discipline, through a case study. We discussed how the company successfully implemented a continuous delivery workflow in line with its own needs and improved its product and software development processes.

Obviously, this study suggests that continuous delivery practices can be applied not only in large institutions but also in startups.

Moreover, some steps of the workflow in figure 3 need to be improved. In particular, the non-functional tests, for instance stress and security tests, need to be improved. More effective use of Docker is in our future work plans; by expanding the use of Docker we aim to make deployments more efficient and effective.

References

[1] J. Humble. (2016) The case for continuous delivery. [Online]. Available: https://www.thoughtworks.com/insights/blog/case-continuous-delivery

[2] A. Morrison and B. Parker. (2016) An aerospace industry CIO's move toward DevOps and a test-driven development environment. [Online]. Available: http://www.pwc.com/us/en/technology-forecast/2013/issue2/interviews/interview-ken-venner.html

[3] T. Z. and A. Walgren. (2016) Huawei's CD transformation journey. [Online]. Available: https://www.youtube.com/watch?v=xsjsWU23k80

[4] M. Pais. (2016) Continuous delivery stories. [Online]. Available: http://www.infoq.com/resource/minibooks/emag-continuous-delivery-stories/en/pdf/Continous-Delivery-Stories-eMag.pdf

[5] Wikipedia. (2016) Systems development life cycle. [Online]. Available: https://en.wikipedia.org/wiki/Systems

[6] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Addison-Wesley, 2011.

[7] C. L. Juha Itkonen, “Perceived benefits of adopting continuous delivery practices,” 10th International Symposium on Empirical Software Engineering and Measurement (ESEM). Ciudad Real, Spain: ACM, 2016.

[8] K. Morris. (2016) Continuous delivery vs. traditional agile. [Online]. Available: https://dzone.com/articles/continuous-delivery-vs

[9] M. Fowler. (2016) Continuous delivery. [Online]. Available: http://martinfowler.com/bliki/ContinuousDelivery.html

[10] K. Beck, Extreme Programming Explained: Embrace Change, ser. The XP Series. Boston, San Francisco, Paris: Addison-Wesley, 2000. [Online]. Available: http://opac.inria.fr/record=b1098028

[11] B. Fitzgerald and K.-J. Stol, “Continuous software engineering: A roadmap and agenda,” Journal of Systems and Software, no. 25, pp. 19–59, Aug. 2015.

[12] C. Tozzi. (2016) Continuous integration vs. continuous delivery: There's an important difference. [Online]. Available: https://devops.com/continuous-integration-vs-continuous-delivery-theres-important-difference/

[13] L. Chen, “Continuous delivery: Huge benefits, but challenges too,” IEEE Software, Mar. 2015.

[14] J. Allen. (2016) Patterns for continuous delivery. [Online]. Available: https://www.infoq.com/articles/Continous-Delivery-Patterns

[15] N. Bozic. (2016) Continuous delivery through pipelines. [Online]. Available: https://www.go.cd/2015/12/28/gocd-continuous-delivery-through-pipelines/

[16] (2016) Jenkins. [Online]. Available: https://jenkins.io/

[17] (2016) Appium introduction. [Online]. Available: http://appium.io/introduction.html

[18] (2016) Apache JMeter. [Online]. Available: http://jmeter.apache.org/

[19] (2016) BlazeMeter. [Online]. Available: https://www.blazemeter.com/

[20] (2016) SonarQube. [Online]. Available: https://en.wikipedia.org/wiki/SonarQube

[21] (2016) JFrog. [Online]. Available: https://www.jfrog.com/

[22] (2016) Liquibase DB change management. [Online]. Available: http://www.liquibase.org/

[23] (2016) Docker. [Online]. Available: https://www.docker.com/

Adopting Continuous Delivery Practices to Increase Efficiency: A Case Study - Part 1

As a result of my work at Lojika, I prepared a conference paper that explains a high-level, end-to-end solution framework for Lojika's software and product development. In this paper, I described the fundamental approaches and the various technologies we used, to clarify how we reached the optimal level of responsiveness and effectiveness.

I have divided the whole study into two parts so as not to bore the audience. This first post covers the introduction; the second part elaborates by explaining the flaws and bottlenecks in Lojika's communication flow, along with some methods and technologies for resolving these problems. Most importantly, it covers how we applied continuous delivery practices and principles of agile software development to overcome all of those problems in the course of my work.

You can also download this paper.

Abstract—Consistency, responsiveness and reliability are some of the common issues for companies in online business that deliver value to their customers. They need to be able to bring new ideas and changes to production with minimal technical errors. As an agile development methodology, continuous delivery presents best practices for creating reasonable and reliable builds without any special effort. No doubt this brings efficiency and effectiveness along with it. The aim of this paper is to investigate the observed effects of continuous delivery on a startup company, Lojika. Due to the importance of its early users’ contributions, a well-structured feedback and communication mechanism is a must in order to communicate feedback throughout the teams, including product, development and marketing, as quickly as possible. For that reason, probing into the development life cycle of Lojika will be very helpful for enhancing current approaches with new case studies.

Keywords—case study, continuous delivery, continuous communication, build pipeline, agile development, automated testing, devops.

Introduction

Rapidity, in the new era of software development, is a significant necessity in dynamic, fast-growing and changing markets. Hot companies like Google, Facebook, Uber, Airbnb and Tesla are able to react to changes quickly, and this makes them serious competitors in the market. Many Google services see releases multiple times a week, and Facebook releases to production twice a day [1]. Even in the aerospace industry, responsiveness and rapid change are emerging factors for becoming more effective and efficient. SpaceX, a space transport company founded in 2002, has succeeded in integrating agile development practices into its development processes. Ken Venner, CIO of SpaceX, says that ”we release at least once a week, sometimes midweek” [2].

Continuous delivery is not just for startups and lean organizations. Large-scale businesses and enterprises can also be guided by agile practices to adopt agility in their development processes in an effective and efficient way. Huawei is one of the enterprises that has been successful in applying continuous delivery practices. Huawei is a $40B company delivering communications technologies for telecom carriers, enterprises and consumers. The figures for Huawei's R&D are tremendous: 2000 developers working worldwide, 1000 applications, more than 2000 releases per day, more than 100,000 compiles and builds per day, more than 1 million test cases run per day, and so on [3]. Another success story is the HP LaserJet firmware team, which builds the firmware and the operating system that HP LaserJet printers run on. After they discovered the slowness and ineffectiveness in their operations, they found that ten percent of the team was spending its time on code integration, and other processes were more time-consuming than they should have been. They re-architected their entire system from the ground up: code integration was handled by continuous integration servers, and they put a large amount of automated testing in place, including both unit tests and 30,000 automated functional tests. After rebuilding everything, they report that they spend 2% of their time on continuous integration and 40% of their time building new features [4].

Despite some specific differences between the various types of software development life cycles (SDLC), the best-known phases are requirements gathering and planning, design and development, testing and deployment [5]. In line with the increasing value of agile principles, ”efficiency in delivering software” becomes the main focus among them. Consider the fundamental principle in the agile manifesto: ”Our highest priority is to satisfy the customer through early and continuous delivery of valuable software” [6]. The main purpose of continuous delivery is to reduce the risks associated with delivering new versions, to increase feedback, and to improve collaboration between the development, testing and operations people responsible for delivery.

Continuous delivery and deployment practices have been proposed to enable accelerated value delivery, reduce the risk of failure, increase productivity, and improve visibility, feedback and quality [7]. In many organizations, release frequency is measured in weeks or months, and the release process is certainly not repeatable or reliable. It is manual and often requires a team of people to deploy the software even into a testing or staging environment [6].

In this study, we introduce an implementation of continuous delivery practices in a car-pooling application, TAG (Tek Araba Gidelim), which is developed by Lojika. We investigate the observed effects of continuous delivery on the communication and feedback structure within the organization, consisting of our product, development and marketing teams, which are all involved in our continuous delivery process. The rest of the paper is organized as follows:

In the next section, some important details of continuous delivery practices are described to clarify what must be considered to apply them successfully. In the third section, the reasons why Lojika prefers continuous delivery practices over other approaches are described. In the fourth section of the paper, the implementation details of a successful adoption of the continuous delivery approach within Lojika are introduced. The results of the study are presented in section 5. Section 6 concludes our study by summing up all sections and results.

Understanding Continuous Delivery

Continuous delivery is accepted as a subset of agile in which the software is ready for release while development is continuing, which means that there should be no extra effort to make reasonable builds; this is what sets continuous delivery apart from ”traditional agile” [8].

Another well-known definition of continuous delivery is ”a software development discipline where you build software in such a way that the software can be released to production at any time” [9].

In essence, the whole story begins with continuous integration. Firstly, ”integration” needs to be clarified. Development in a team is a process that has to include integration activity for every change throughout development. The integration phase is unpredictable and can take more time than the programming period [10]. A good rule of thumb and de facto best practice for sustaining the integration of developed components is to use source control and configuration management tools such as Git, Mercurial or Subversion. As illustrated in figure 1, the code-pushing phase is the developers' first attempt to trigger the pipeline from the commit stage; the commit stage is an automatically triggered process comprising interconnected steps such as compiling code, running tests (unit, acceptance and non-functional performance tests, etc.), measuring code coverage and building artifacts. Whenever a failure occurs in the automated process, it helps ensure that the problems leading to integration failures are solved as quickly as possible by those responsible [11]. Continuous integration is primarily focused on asserting that the code compiles successfully and passes unit and acceptance tests. However, it is not enough to create a continuous delivery process [6]. Continuous delivery is more complicated and is about the processes that have to happen after code is integrated, so that application changes are delivered to users. Setting up a continuous integration tool such as Jenkins or Bamboo to apply code changes continuously does not mean that continuous delivery is performed; it merely amounts to using a continuous integration server [12].

Nevertheless, continuous integration is the first prerequisite for stepping up to continuous delivery; that is to say, running the whole commit stage automatically is the starting point of continuous delivery. Beyond that, the acceptance test stage, the non-functional validation stage, the manual test stage and the release stage must also be considered to achieve success with continuous delivery. These stages are enabled through ”the deployment pipeline” [6].

The commit stage asserts that the system works at a low technical level and validates the rules that must pass for the code base to be accepted as stable [6]. This is the earliest stage for providing quick feedback to developers. Developers check code in to the source control management system, and the continuous integration server (CI server) automatically polls it for changes. The CI server compiles the source code and executes unit and integration (component) tests. A code analysis process is performed to check the quality of the code base. In case of an error at this stage, the pipeline stops and notifies the responsible developers. Otherwise, if everything goes well, the generated artifacts, reports or metadata that will be used later in the pipeline are uploaded to the artifact repository server and the pipeline steps up to the next stage [13]. The transition between these steps has to be done automatically. As illustrated in figure 1, to sustain the feedback cycle after any failure, the delivery team has to be informed about the failure. Otherwise, having passed the build & test stage successfully, the developer team can proceed with its coding activities. But this does not mean that the developers are done with their work; at every stage of the delivery pipeline, it is possible to get negative feedback from the QA team.

According to Humble and Farley’s book [6], there are essential principles and practices for performing an effective commit stage:

• Provide fast and useful feedback. Fail fast to identify errors early. To do that, it is essential to include as many unit and component tests as possible.

• Decide what should break the commit stage. Even if everything goes well and the commit stage is successful, there may be something wrong with the quality of the code base. There should be a reasonable threshold that the code base must reach to be acceptable.

• Tend the commit stage carefully

• Give developers ownership

• Use a build master for very large teams

The acceptance test stage is the next automatically executed stage. It is extremely valuable, and it can be very expensive to create and maintain. In many organizations, acceptance testing is done by a separate dedicated team, but this strategy can lead to developers not feeling ownership of and responsibility for the acceptance stage of the product. Instead of manual testing, performing an automated test process brings significant improvements in communication structure, reliability and responsiveness:

• Manual testing takes a long time and is expensive to perform, whereas executing acceptance tests automatically saves time. This also helps teams focus more on critical issues rather than on less important, repetitive tasks.

• Automating acceptance tests protects the application when large-scale changes are made.

• Automating acceptance tests increases productivity, predictability and reliability.

• By automating the process, the whole team owns the acceptance tests [6].

As we discussed earlier, the aim of continuous delivery is to be able to deliver software frequently. The essential building blocks of a successful implementation of the deployment pipeline are given below:

• Automate everything

• Keep everything in source control

• Build quality in

• Only build your binaries once

• Deploy the same way to every environment

• Smoke-test your deployments

• Deploy into a copy of production

• The process for releasing/deploying software must be repeatable and reliable

• If something is difficult or painful, do it more often

• Done means released

• Everybody has responsibility for the release process

• Improve continuously

Continuous delivery is suitable for all types of companies, but there may be some procedural differences from company to company. Based on its best practices and basic principles, companies can use tailored and differentiated models of their own [14].

This is the end of the first part of my study. You can continue reading part 2.

References

[1] J. Humble. (2016) The case for continuous delivery. [Online]. Available: https://www.thoughtworks.com/insights/blog/case-continuous-delivery

[2] A. Morrison and B. Parker. (2016) An aerospace industry CIO's move toward DevOps and a test-driven development environment. [Online]. Available: http://www.pwc.com/us/en/technology-forecast/2013/issue2/interviews/interview-ken-venner.html

[3] T. Z. and A. Walgren. (2016) Huawei's CD transformation journey. [Online]. Available: https://www.youtube.com/watch?v=xsjsWU23k80

[4] M. Pais. (2016) Continuous delivery stories. [Online]. Available: http://www.infoq.com/resource/minibooks/emag-continuous-delivery-stories/en/pdf/Continous-Delivery-Stories-eMag.pdf

[5] Wikipedia. (2016) Systems development life cycle. [Online]. Available: https://en.wikipedia.org/wiki/Systems

[6] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Addison-Wesley, 2011.

[7] C. L. Juha Itkonen, “Perceived benefits of adopting continuous delivery practices,” 10th International Symposium on Empirical Software Engineering and Measurement (ESEM). Ciudad Real, Spain: ACM, 2016.

[8] K. Morris. (2016) Continuous delivery vs. traditional agile. [Online]. Available: https://dzone.com/articles/continuous-delivery-vs

[9] M. Fowler. (2016) Continuous delivery. [Online]. Available: http://martinfowler.com/bliki/ContinuousDelivery.html

[10] K. Beck, Extreme Programming Explained: Embrace Change, ser. The XP Series. Boston, San Francisco, Paris: Addison-Wesley, 2000. [Online]. Available: http://opac.inria.fr/record=b1098028

[11] B. Fitzgerald and K.-J. Stol, “Continuous software engineering: A roadmap and agenda,” Journal of Systems and Software, no. 25, pp. 19–59, Aug. 2015.

[12] C. Tozzi. (2016) Continuous integration vs. continuous delivery: There's an important difference. [Online]. Available: https://devops.com/continuous-integration-vs-continuous-delivery-theres-important-difference/

[13] L. Chen, “Continuous delivery: Huge benefits, but challenges too,” IEEE Software, Mar. 2015.

[14] J. Allen. (2016) Patterns for continuous delivery. [Online]. Available: https://www.infoq.com/articles/Continous-Delivery-Patterns

[15] N. Bozic. (2016) Continuous delivery through pipelines. [Online]. Available: https://www.go.cd/2015/12/28/gocd-continuous-delivery-through-pipelines/

[16] (2016) Jenkins. [Online]. Available: https://jenkins.io/

[17] (2016) Appium introduction. [Online]. Available: http://appium.io/introduction.html

[18] (2016) Apache JMeter. [Online]. Available: http://jmeter.apache.org/

[19] (2016) BlazeMeter. [Online]. Available: https://www.blazemeter.com/

[20] (2016) SonarQube. [Online]. Available: https://en.wikipedia.org/wiki/SonarQube

[21] (2016) JFrog. [Online]. Available: https://www.jfrog.com/

[22] (2016) Liquibase DB change management. [Online]. Available: http://www.liquibase.org/

[23] (2016) Docker. [Online]. Available: https://www.docker.com/

Awesome Resources for Microservices and Scalability

The purpose of this post is simply to wrap up the best resources (or at least those above the mediocre ones) that I have found on the internet (websites, blogs, books, etc.). I have put effort into splitting them into parts according to specific subjects, mostly related to scalability issues and Microservice Architecture principles. I will keep updating this post as much as possible.

Understanding the Fundamentals and More

Tools & Architectural Implementation
Microservice Orchestration
For Java & Spring Folk

Useful Readings on Scalability

Useful Books*

  • Reactive Microservices Architecture by Jonas Boner (a free mini ebook of only 54 pages; I strongly suggest you read it.)
  • Building Microservices by Sam Newman.
  • Developing Reactive Microservices by Markus Eisele, Enterprise Advocate, Lightbend, Inc.
  • Domain-Driven Design: Tackling Complexity in the Heart of Software (a strongly suggested read in the Microservices ecosystem for understanding domain-based thinking)
  • Patterns of Enterprise Application Architecture by Martin Fowler

* Books are not ordered by importance.

Very Useful Resources Given in Jonas Boner's Book, Reactive Microservices Architecture
  • For an insightful discussion on the problems caused by a mutable state, see John Backus’ classic Turing Award Lecture “Can Programming Be Liberated from the von Neumann Style?”

  • Neil Gunther’s Universal Scalability Law is an essential tool in understanding the effects of contention and coordination in concurrent and distributed systems.

  • For a discussion on the use of bulkheads in ship construction, see the Wikipedia page https://en.wikipedia.org/wiki/Bulkhead_(partition).

  • For an in-depth analysis of what made Titanic sink see the article “Causes and Effects of the Rapid Sinking of the Titanic.”

  • Process (service) supervision is a construct for managing failure used in Actor languages (like Erlang) and libraries (like Akka). Supervisor hierarchies are a pattern in which processes (or actors/services) are organized in a hierarchical fashion, with the parent process supervising its subordinates. For a detailed discussion of this pattern see “Supervision and Monitoring.”

  • Our definition of a promise is taken from the chapter “Promise Theory” from Thinking in Promises by Mark Burgess (O’Reilly), which is a very helpful tool in modeling and understanding reality in decentralized and collaborative systems. It shows us that by letting go and embracing uncertainty we get on the path towards greater certainty.

  • The Unix philosophy is captured really well in the classic book The Art of Unix Programming by Eric Steven Raymond (Pearson Education, Inc.).

  • For an in-depth discussion on the Single Responsibility Principle see Robert C. Martin’s website “The Principles of Object Oriented Design.”

  • Visit Martin Fowler’s website for more information on how to use the Bounded Context and Ubiquitous Language modeling tools.

  • See Jay Kreps’ epic article “The Log: What every software engineer should know about real-time data’s unifying abstraction.”

  • Martin Fowler has done a couple of good write-ups on Event Sourcing and CQRS.

  • The quote is taken from Pat Helland’s insightful paper “Immutability Changes Everything.”

  • As brilliantly explained by Joel Spolsky in his classic piece “The Law of Leaky Abstractions.”

  • The fallacies of RPC have not been better explained than in Steve Vinoski’s “Convenience over Correctness.”

  • We are using Tyler Akidau’s definition of streaming, “A type of data processing engine that is designed with infinite data sets in mind” from his article “The world beyond batch: Streaming 101.”

  • Location Transparency is an extremely important but very often ignored and under-appreciated principle. The best definition of it can be found in the glossary of the Reactive Manifesto — which also puts it in context: http://www.reactivemanifesto.org/glossary#Location-Transparency.