
Can Cobanoglu

geek, technologist, life-long learner, kinda musician, phd candidate in computer science, fitness addict, entrepreneur...


Adopting Continuous Delivery Practices to Increase Efficiency: A Case Study - Part 2

Fig. 3. Illustration of Lojika’s Deployment Pipeline

Coping With Challenges

Lojika, also known as ”Lojika Field Labs”, is an R&D company that runs numerous nationally and EU-funded innovation projects. By nature, the research and development process of innovation projects contains many ambiguities. Lojika gathers information for its innovation projects directly from different fields around the world. Because Lojika tackles society-level problems in car-pooling, the Physical Internet, the sharing economy and crowd-sourcing, embracing change is an inevitable necessity for the company culture to succeed. Some of the challenges we have faced are:

  • Clarifying ambiguities about customer needs
  • Adapting to a rapidly changing environment
  • Evaluating and implementing field feedback quickly in order to respond fast
  • Minimizing the delay of the analyze-develop-test-deploy cycle
  • Increasing participation in all phases, from the first evaluation of feedback to release
  • Ensuring transparency

To succeed in changing environments, selecting a proper software development approach becomes a key factor: it lets a company adopt changes quickly and deliver new ideas and change requests with less doubt and effort. This is exactly where continuous delivery practices come into play. Continuous delivery makes it possible to continuously adapt software in line with user feedback, shifts in the market and changes of business strategy. Testing, support, development and operations work together as one delivery team to automate and streamline the build, test and release process [15].

As illustrated in Figure 2, Lojika’s communication flow is composed of three main processes. The field study is the best step for understanding users and identifying their needs; new ideas also emerge during this process, which the marketing team leads and in which the product and development teams play an important part. Analyzing the feedback is the second main process: the product team is responsible for distilling the information gathered from the field. The last step of the communication flow, and the subject of this study, is the delivery pipeline, the ”baking” step of the whole flow. In other words, development is just one part of continuous delivery. In this process, information coming from the previous step goes into the baking pipeline, which represents the development or build pipeline, the most significant part of continuous delivery. Some feedback is implemented in a build that is also a release candidate and therefore ready for deployment to production. This is the heart of continuous delivery: every build baked in the deployment pipeline keeps the software always ready for release. To sum up, continuous delivery is a vital part of building a continuous communication cycle between stakeholders, including users in the field.

Taking into consideration Lojika’s culture and the challenges given above, a tailored version of a deployment pipeline is described in the next section.

Building the Deployment Pipeline

The deployment or build pipeline is a high-level visualization of the connected jobs that compose the entire pipeline. Each job has its own responsibility, and jobs are connected as upstream and downstream, meaning that each job can trigger its downstream jobs and pass information to them based on its success or failure. Considering the fundamental principles and practices, and with the use of some tools and technologies, it is possible to build and operate a successful deployment pipeline.
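To make the upstream/downstream idea concrete, a chain of connected jobs can be expressed as a single declarative Jenkinsfile, where each stage triggers the next only on success. This is a minimal sketch, not Lojika's actual configuration; the stage names, Gradle tasks and mail address are illustrative placeholders.

```groovy
// Hypothetical Jenkinsfile sketch: stages run in order, and a failure in any
// stage stops the pipeline, mirroring an upstream/downstream job chain.
pipeline {
    agent any
    stages {
        stage('Build')        { steps { sh './gradlew assemble' } }
        stage('Unit Tests')   { steps { sh './gradlew test' } }
        stage('Code Quality') { steps { sh './gradlew sonarqube' } }
        stage('Publish')      { steps { sh './gradlew artifactoryPublish' } }
    }
    post {
        failure {
            // keep the feedback cycle fast: notify the team on any failure
            mail to: 'dev-team@example.com',
                 subject: "Pipeline failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for details."
        }
    }
}
```

The same chain can equally be modeled as separate Jenkins jobs wired together with upstream/downstream triggers; the single-file form simply makes the ordering visible in one place.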

Creating a delivery workflow for the company is more important than choosing tools and technologies. Nevertheless, the technologies and tools used for the pipeline are described in Table I along with their explanations.

As shown in Figure 3, each step is numbered. In the next part of the study, each phase of the workflow defined for Lojika’s TAG project is explained in detail, along with the difficulties encountered, the technologies and tools used to accomplish each step, and the steps taken to solve those difficulties.

A. Development Phase

Everything starts after feedback is received. Once analysis of the feedback is completed, it is transferred to the development phase. Developers complete their work in their own development environments and then submit it for review. If the review is positive, the change is merged into the develop branch and is then ready to be sent to the continuous integration server. Developers collaborate with Git, and the source code is reviewed through GitHub.

#A.2 is responsible for the transition from development to continuous integration. At this stage, changes pushed to GitHub are pulled by Jenkins, one of the most well-known continuous integration applications. Task #A.2 is also the first task to run in the deployment pipeline.

B. Continuous Integration Phase

#B.1: The build process for the latest version of the source code starts. If compilation fails, developers are informed about the failure via the Jenkins email plugin. Otherwise, the pipeline goes on.

#B.2: Automated unit and component tests run in the same job. If any test fails, developers are notified by e-mail. If there is no problem, the next process starts.

#B.3: SonarQube is used for quality measurements of the source code. With Jenkins’ SonarQube integration, the source code is submitted for quality measurement. If the quality values fall below the specified thresholds, the pipeline does not continue and the developers are informed.

#B.4: If everything completes without error, the artifacts created for later use (JAR files, metadata, etc.) are uploaded to the JFrog artifact repository.

#B.5: Both low-level documentation (JavaDoc, etc.) and the REST API documentation required by client developers are generated automatically. Swagger is used to create the REST API documentation.

#B.6: Lojika’s development architecture is multi-layered; the infrastructure and frontend development teams are positioned separately. The client team needs to access the infrastructure services with minimum effort so that they can work smoothly. In this step, a Docker image is created that client developers can use in their local development environments. From this point on, client developers can pull the desired version of the backend services onto their machines at any time.
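A step like this boils down to packaging the backend artifact from #B.4 into an image and pushing it with the build's version tag. The following Dockerfile is a hedged sketch of that idea; the base image, JAR path and port are assumptions for illustration, not Lojika's actual setup.

```dockerfile
# Hypothetical Dockerfile sketch: package the backend services so client
# developers can run any tagged version locally. Paths/ports are illustrative.
FROM openjdk:8-jre-alpine
COPY build/libs/backend-services.jar /app/backend-services.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/backend-services.jar"]
```

A client developer would then run something like `docker run -p 8080:8080 registry.example.com/backend:1.4.2` (registry name and tag hypothetical) to get that exact build of the backend on their machine.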

#B.7: After each step in this process completes successfully, both backend and client developers are informed.

#B.8: The same build is used for each phase transition. This means that if you want to design a successful pipeline, the ”Only Build Your Binaries Once” rule mentioned in the previous chapters must be followed. So the artifact created in step B.1 itself is passed by the trigger in step B.8 to the next phase, the testing phase.

C. Testing Phase

Some steps in the testing phase progress automatically, but in some cases manual intervention is required. This stage can be performed fully automatically or fully manually, but at the end of the day each build that comes through the pipeline must be approved by a QA role and judged ready for the next phase.

#C.1: The build on the pipeline is deployed to the test servers. This is done with plugins provided by Jenkins.

#C.2: At this point, all database changes are applied using Liquibase. Managing database versioning and changes is one of our main challenges, so we automated the management of database changes, eliminating the manual work that would disrupt the pipeline and waste time.
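With Liquibase, each change is an immutable changeSet in a changelog that the tool applies once per environment and records in its tracking table, which is what makes the same migration repeatable across test, staging and production. A minimal sketch (table, column and id names are hypothetical, not from the TAG schema):

```xml
<!-- Hypothetical Liquibase changelog sketch: each changeSet is applied at most
     once per database and tracked in the DATABASECHANGELOG table. -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

    <changeSet id="42-add-rating-column" author="dev">
        <addColumn tableName="driver">
            <column name="rating" type="decimal(3,2)" defaultValueNumeric="0"/>
        </addColumn>
        <!-- rollback lets the pipeline revert the environment if a build is rejected -->
        <rollback>
            <dropColumn tableName="driver" columnName="rating"/>
        </rollback>
    </changeSet>
</databaseChangeLog>
```

Running `liquibase update` against each environment, as part of the deployment job, then brings that database to the schema version the deployed artifact expects.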

#C.3: Writing tests for every case and scenario is a very laborious task. In general, tests are written from the most important, highest-priority scenarios down to the least significant ones. Time spent on unnecessary tests is time that cannot be allocated to risky and important ones. Therefore, automating the tests that are relatively easy to automate is strategically more reasonable, and this is Lojika’s strategy. Scenarios that are easy to automate and do not have much integration with external systems are run by Appium for both iOS and Android clients. This leaves more time for the risky and dependent scenarios.

#C.4: The risks, uncertainties and challenges in this step are very different from the previous one. Non-functional tests need to be fully automated to ensure continuity and sustainability. In particular, performance, load and stress tests of the technically risky parts that may be overloaded by users are performed automatically with JMeter and BlazeMeter.

#C.5: As already mentioned, QA approval is absolutely necessary for this phase to complete successfully. Thanks to the Jenkins promotion plugin, the person responsible for promotion promotes the current build to the next stage of the pipeline.

D. Release Phase

#D.1: The same artifact that has completed each stage successfully is now ready to go to a production-like environment. The purpose of this step is to simulate the real environment before the release.

#D.2: The same operation as in C.2 is applied.

#D.3: More automation. Although the artifact remains unchanged and everything went well during the testing phase, it is a good idea to automatically test specific scenarios one last time.

#D.4: Continuous delivery is not possible entirely without manual intervention. The good news is that an artifact that reaches this stage is ready to release. Without any formal script, everyone involved exercises this final build with realistic scenarios one last time before the release.

#D.5: There is no rule that every artifact must be released. According to the pipeline logic, every promoted artifact can be released at any time. Depending on the release strategy and conditions, the release decision may be delayed, or an older artifact may be chosen for release. Remember that with continuous delivery, each artifact is expected to be a ”release candidate”. Whatever the case, there must be a role that makes the release decision.

#D.6: After the promotion decision, the artifact is deployed to the production environment and stakeholders related to this process are informed about the changes.


In Lojika, continuous delivery has been successfully implemented: 2,070 builds have been produced so far, 827 of which were successfully deployed to the test servers, 229 to the staging server, and 101 all the way to production.

So far, unit and component tests have been run more than 620,000 times.

In addition to the numerical improvements, cultural and organizational improvements have been observed as well:

  • Quick response to user feedback
  • Improved end-to-end communication between the product and software development teams
  • Increase in company-wide participation
  • Reduction of hot-fixes
  • Ease of managing database changes in any environment
  • Improved software quality

Conclusion and Future Work

This study attempts to explain the benefits of continuous delivery, a part of the agile discipline, through a case study. We showed that a company can successfully implement a continuous delivery workflow in line with its own needs and thereby improve its product and software development processes.

This study also suggests that continuous delivery practices can be applied not only in large institutions but also in startups.

Moreover, some steps of the workflow in Figure 3 need to be improved. In particular, the non-functional tests, such as stress and security tests, need work. More effective use of Docker is also in our future plans: by expanding its use we aim to make deployments more efficient and effective.


[1] J. Humble. (2016) The case for continuous delivery. [Online].

[2] A. Morrison and B. Parker. (2016) An aerospace industry CIO's move toward DevOps and a test-driven development environment. [Online].

[3] T. Z. and A. Walgren. (2016) Huawei's CD transformation journey. [Online].

[4] M. Pais. (2016) Continuous delivery stories. [Online].

[5] Wikipedia. (2016) Systems development life cycle. [Online].

[6] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Addison-Wesley, 2011.

[7] J. Itkonen et al., “Perceived benefits of adopting continuous delivery practices,” in Proc. 10th International Symposium on Empirical Software Engineering and Measurement (ESEM). Ciudad Real, Spain: ACM, 2016.

[8] K. Morris. (2016) Continuous delivery vs. traditional agile. [Online].

[9] M. Fowler. (2016) Continuous delivery. [Online].

[10] K. Beck, Extreme Programming Explained: Embrace Change, ser. The XP Series. Boston: Addison-Wesley, 2000 (reprinted 2001, 2004). [Online].

[11] B. Fitzgerald and K.-J. Stol, “Continuous software engineering: A roadmap and agenda,” Journal of Systems and Software, no. 25, pp. 19–59, Aug. 2015.

[12] C. Tozzi. (2016) Continuous integration vs. continuous delivery: There's an important difference. [Online].

[13] L. Chen, “Continuous delivery: Huge benefits, but challenges too,” IEEE Software, Mar. 2015.

[14] J. Allen. (2016) Patterns for continuous delivery. [Online].

[15] N. Bozic. (2016) Continuous delivery through pipelines. [Online].

[16] (2016) Jenkins. [Online].

[17] (2016) Appium introduction. [Online].

[18] (2016) Apache JMeter. [Online].

[19] (2016) BlazeMeter. [Online].

[20] (2016) SonarQube. [Online].

[21] (2016) JFrog. [Online].

[22] (2016) Liquibase DB change management. [Online].

[23] (2016) Docker. [Online].

Adopting Continuous Delivery Practices to Increase Efficiency: A Case Study - Part 1

As a result of my work at Lojika, I've prepared a conference paper that explains a high-level, end-to-end solution framework for Lojika's software/product development. In this paper, I described the fundamental approaches and various technologies we used to reach an optimal level of responsiveness and effectiveness.

I divided this study into two parts so as not to bore the audience. The first post covers the introduction; the second part elaborates on the flaws and bottlenecks in Lojika's communication flow and describes some methods and technologies for resolving them. Most importantly, it explains how we applied the practices and principles of continuous delivery, as part of agile software development, to overcome all these problems in the course of my work.

You can also download this paper.

Abstract—Consistency, responsiveness and reliability are common issues for companies in online business that deliver value to their customers. They need to be able to bring new ideas and changes to production with minimal technical errors. As an agile development methodology, continuous delivery presents best practices for creating reasonable and reliable builds without special effort, which in turn brings efficiency and effectiveness. The aim of this paper is to investigate the observed effects of continuous delivery on a startup company, Lojika. Because of the importance of its early users' contributions, a well-structured feedback and communication mechanism is a hard requirement in order to communicate feedback across the product, development and marketing teams as quickly as possible. For that reason, probing into the development life cycle of Lojika will be very helpful for enhancing current approaches with new case studies.

Keywords—case study, continuous delivery, continuous communication, build pipeline, agile development, automated testing, devops.


Rapidity, in the new era of software development, is a significant necessity in dynamic, fast-growing and changing markets. Companies like Google, Facebook, Uber, Airbnb and Tesla are able to react to changes quickly, which makes them serious competitors in the market. Many Google services see releases multiple times a week, and Facebook releases to production twice a day [1]. Even in the aerospace industry, responsiveness and rapid change are emerging factors for becoming more effective and efficient. SpaceX, a space transport company founded in 2002, has succeeded in integrating agile development practices into its development processes. Kevin Venner, CIO of SpaceX, says that ”we release at least once a week, sometimes midweek” [2].

Continuous delivery is not just for startups and lean organizations. Large-scale businesses and enterprises can also be guided by agile practices to adopt agility in their development processes in an effective and efficient way. Huawei, a $40B company delivering communications technologies for telecom carriers, enterprises and consumers, is one enterprise that has successfully applied continuous delivery practices. The figures for Huawei's R&D are tremendous: 2,000 developers working worldwide, 1,000 applications, more than 2,000 releases per day, more than 100,000 compile-and-builds per day, more than 1 million test cases run per day, and so on [3]. Another success story concerns the HP LaserJet firmware team, which builds the firmware and the operating system that HP LaserJet printers run on. After discovering slowness and ineffectiveness in their operations, they found that ten percent of the team was spending its time on code integration, and other processes were also more time-consuming than they should be. They re-architected their entire system from the ground up: code integration was handled by continuous integration servers, and they put a large amount of automated testing in place, including both unit tests and 30,000 automated functional tests. After rebuilding everything, they report that they now spend 2% of their time on continuous integration and 40% of their time building new features [4].

Despite some specific differences between the various types of software development life cycles (SDLC), the best-known phases are requirements gathering and planning, design and development, testing and deployment [5]. With the increasing value placed on agile principles, ”efficiency in delivering software” has become the main focus among them. Consider a fundamental principle of the agile manifesto: ”Our highest priority is to satisfy the customer through early and continuous delivery of valuable software” [6]. The main purpose of continuous delivery is to reduce the risks associated with delivering new versions while increasing feedback and improving collaboration between the development, testing and operations people responsible for delivery.

Continuous delivery and deployment practices have been proposed to enable accelerated value delivery, reduce the risk of failure, increase productivity, and improve visibility, feedback and quality [7]. In many organizations, release frequency is measured in weeks or months, and the release process is certainly not repeatable or reliable: it is manual and often requires a team of people to deploy the software even into a testing or staging environment [6].

In this study, we introduce an implementation of continuous delivery practices in TAG (Tek Araba Gidelim), a car-pooling application developed by Lojika. We investigate the observed effects of continuous delivery on the communication and feedback structure within the organization, consisting of our product, development and marketing teams, all of which are involved in our continuous delivery process. The rest of the paper is organized as follows:

In the next section, some important details of continuous delivery practices are described in order to understand what must be considered to apply them successfully. The third section describes why Lojika prefers continuous delivery practices over other approaches. The fourth section introduces the implementation details of a successful adoption of continuous delivery within Lojika. The results of the study are presented in Section 5, and Section 6 concludes by summing up all sections and results.

Understanding Continuous Delivery

Continuous delivery is accepted as a subset of agile in which the software is ready for release while development is continuing. This means that no extra effort should be needed to produce reasonable builds, and that is what sets continuous delivery apart from ”traditional agile” [8].

Another well-known definition of continuous delivery is ”a software development discipline where you build software in such a way that the software can be released to production at any time” [9].

In essence, the whole story begins with continuous integration. First, ”integration” needs to be clarified. Development in a team is a process that has to include integration activity for every change throughout development, and the integration phase is unpredictable and can take more time than the programming period [10]. A good rule of thumb, and a de facto best practice for sustaining the integration of developed components, is to use source control and configuration management tools such as Git, Mercurial or Subversion. As illustrated in Figure 1, the code push by developers is the first trigger of the pipeline, starting the commit stage: an automatically triggered process comprising interconnected steps such as compiling code, running tests (unit, acceptance, non-functional performance tests, etc.), measuring code coverage and building artifacts. Whenever a failure occurs in this automatic process, it helps ensure that the problems leading to integration failures are solved as quickly as possible by those responsible [11]. Continuous integration is primarily focused on asserting that the code compiles successfully and passes unit and acceptance tests; however, that is not enough to create a continuous delivery process [6]. Continuous delivery is more complicated and is about the processes that have to happen after code is integrated in order for application changes to be delivered to users. Setting up a continuous integration tool such as Jenkins or Bamboo to apply code changes continuously does not mean continuous delivery is being performed; it is merely the use of a continuous integration server [12].

Nevertheless, continuous integration is the first prerequisite for stepping up to continuous delivery; that is to say, running the whole commit stage automatically is the starting point of continuous delivery. Beyond that, the acceptance test stage, non-functional validation stage, manual test stage and release stage must also be addressed to succeed at continuous delivery. These stages are enabled through ”the deployment pipeline” [6].

The commit stage asserts that the system works at a low technical level and validates the rules that must pass for the code base to be accepted as stable [6]. This very early stage provides quick feedback to developers. Developers check code in to the source control management system, and the continuous integration server (CI server) automatically polls it for changes. The CI server compiles the source code and executes unit and integration (component) tests, and a code analysis process checks the quality of the code base. If an error occurs at this stage, the pipeline stops and notifies the responsible developers. Otherwise, if everything goes well, the generated artifacts, reports or metadata that will be used later in the pipeline are uploaded to the artifact repository server, and the pipeline steps up to the next stage [13]. The transitions between these steps have to happen automatically. As illustrated in Figure 1, to sustain the feedback cycle, the delivery team has to be informed about any failure. If the build-and-test stage passes, the developer team proceeds with its coding activities, but this does not mean the developers are done with their work: at every stage of the delivery pipeline, it is possible to get negative feedback from the QA team.
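The commit-stage flow described above, compile, test, analyze, publish, stopping and notifying on the first failure, is essentially a chain of gates. A minimal sketch of that control flow (Python used purely for illustration; the step names and notification hook are hypothetical, not part of any real CI server's API):

```python
from typing import Callable, List, Tuple

def run_commit_stage(steps: List[Tuple[str, Callable[[], bool]]],
                     notify: Callable[[str], None]) -> bool:
    """Run pipeline steps in order; stop and notify on the first failure."""
    for name, step in steps:
        if not step():
            notify(f"Commit stage failed at: {name}")
            return False   # pipeline stops; later steps (e.g. publishing) never run
    return True            # all gates passed: the build can be promoted

# Example: the code-analysis gate fails, so artifact publishing never runs.
log = []
ok = run_commit_stage(
    [("compile", lambda: True),
     ("unit tests", lambda: True),
     ("code analysis", lambda: False),   # quality below threshold
     ("publish artifacts", lambda: True)],
    notify=log.append,
)
# ok is False and log holds a single failure notification
```

The point of the sketch is the ordering guarantee: a later gate can assume every earlier gate succeeded, which is exactly what lets the artifact uploaded at the end be treated as a release candidate.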

According to Humble and Farley’s book [6], there are essential principles and practices for an effective commit stage:

• Provide fast and useful feedback. Fail fast to identify errors early; to do that, it is essential to include as many unit and component tests as possible

• Decide what should break the commit stage. Even if everything goes well and the commit stage succeeds, there may still be something wrong with code base quality, so there should be a reasonable threshold for the code base to be considered acceptable

• Tend the commit stage carefully

• Give developers ownership

• Use a build master for very large teams

The acceptance test stage is the next automatically executed stage. It is extremely valuable, but it can be very expensive to create and maintain. In many organizations, acceptance testing is done by a separate dedicated team, but this strategy can lead to developers not feeling ownership of and responsibility for the acceptance stage of the product. Compared with manual testing, an automated test process brings considerable improvements in communication structure, reliability and responsiveness:

• Manual testing takes a long time and is expensive to perform, whereas executing acceptance tests automatically saves time. This also helps teams focus on critical issues rather than on less important, repetitive tasks.

• Automating acceptance tests protects the application when large-scale changes are made.

• Automating acceptance tests improves productivity, predictability and reliability.

• By automating the process, the whole team owns the acceptance tests [6]

As discussed earlier, the aim of continuous delivery is to be able to deliver software frequently. The essential building blocks of a successful deployment pipeline implementation are given below:

• Automate everything

• Keep everything in source control

• Build quality in

• Only build your binaries once

• Deploy the same way to every environment

• Smoke-test your deployments

• Deploy into a copy of production

• The process for releasing/deploying software must be repeatable and reliable

• If something is difficult or painful, do it more often

• Done means released

• Everybody has responsibility for the release process

• Improve continuously

Continuous delivery is suitable for all types of companies, though there may be procedural differences from company to company. Based on its best practices and basic principles, companies can use tailored and differentiated models of their own [14].

This is the end of the first part of my study. You can continue reading part 2.


[1] J. Humble. (2016) The case for continuous delivery. [Online].

[2] A. Morrison and B. Parker. (2016) An aerospace industry CIO's move toward DevOps and a test-driven development environment. [Online].

[3] T. Z. and A. Walgren. (2016) Huawei's CD transformation journey. [Online].

[4] M. Pais. (2016) Continuous delivery stories. [Online].

[5] Wikipedia. (2016) Systems development life cycle. [Online].

[6] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Addison-Wesley, 2011.

[7] J. Itkonen et al., “Perceived benefits of adopting continuous delivery practices,” in Proc. 10th International Symposium on Empirical Software Engineering and Measurement (ESEM). Ciudad Real, Spain: ACM, 2016.

[8] K. Morris. (2016) Continuous delivery vs. traditional agile. [Online].

[9] M. Fowler. (2016) Continuous delivery. [Online].

[10] K. Beck, Extreme Programming Explained: Embrace Change, ser. The XP Series. Boston: Addison-Wesley, 2000 (reprinted 2001, 2004). [Online].

[11] B. Fitzgerald and K.-J. Stol, “Continuous software engineering: A roadmap and agenda,” Journal of Systems and Software, no. 25, pp. 19–59, Aug. 2015.

[12] C. Tozzi. (2016) Continuous integration vs. continuous delivery: There's an important difference. [Online].

[13] L. Chen, “Continuous delivery: Huge benefits, but challenges too,” IEEE Software, Mar. 2015.

[14] J. Allen. (2016) Patterns for continuous delivery. [Online].

[15] N. Bozic. (2016) Continuous delivery through pipelines. [Online].

[16] (2016) Jenkins. [Online].

[17] (2016) Appium introduction. [Online].

[18] (2016) Apache JMeter. [Online].

[19] (2016) BlazeMeter. [Online].

[20] (2016) SonarQube. [Online].

[21] (2016) JFrog. [Online].

[22] (2016) Liquibase DB change management. [Online].

[23] (2016) Docker. [Online].

Awesome Resources for Microservices and Scalability

The purpose of this post is to wrap up the best resources (at least those above mediocre) that I found on the internet: websites, blogs, books, etc. I have put effort into splitting them into sections according to specific subjects, mostly related to scalability issues and Microservice Architecture principles. I will be updating this post as much as possible.

Understanding the Fundamentals and More

Tools & Architectural Implementation
Microservice Orchestration
For Java & Spring Folk

Useful Readings on Scalability

Useful Books*

  • Reactive Microservices Architecture by Jonas Bonér (a mini free ebook of only 54 pages; I strongly suggest you read it.)
  • Building Microservices by Sam Newman.
  • Developing Reactive Microservices by Markus Eisele, Enterprise Advocate, Lightbend, Inc.
  • Domain-Driven Design: Tackling Complexity in the Heart of Software (strongly suggested in the Microservices ecosystem for understanding domain-based thinking)
  • Patterns of Enterprise Application Architecture by Martin Fowler

* Books are not ordered by importance.

Very Useful Resources Given in Jonas Bonér's Book, Reactive Microservices Architecture
  • For an insightful discussion of the problems caused by mutable state, see John Backus’ classic Turing Award lecture “Can Programming Be Liberated from the von Neumann Style?”

  • Neil Gunther’s Universal Scalability Law is an essential tool for understanding the effects of contention and coordination in concurrent and distributed systems.

  • For a discussion on the use of bulkheads in ship construction, see the Wikipedia page

  • For an in-depth analysis of what made Titanic sink see the article “Causes and Effects of the Rapid Sinking of the Titanic.”

  • Process (service) supervision is a construct for managing failure used in Actor languages (like Erlang) and libraries (like Akka). Supervisor hierarchies is a pattern where the processes (or actors/services) are organized in a hierarchical fashion where the parent process is supervising its subordinates. For a detailed discussion on this pattern see “Supervision and Monitoring.”

  • Our definition of a promise is taken from the chapter “Promise Theory” from Thinking in Promises by Mark Burgess (O’Reilly), which is a very helpful tool in modeling and understanding reality in decentralized and collaborative systems. It shows us that by letting go and embracing uncertainty we get on the path towards greater certainty.

  • The Unix philosophy is captured really well in the classic book The Art of Unix Programming by Eric Steven Raymond (Pearson Education, Inc.).

  • For an in-depth discussion on the Single Responsibility Principle see Robert C. Martin’s website “The Principles of Object Oriented Design.”

  • Visit Martin Fowler’s website for more information on how to use the Bounded Context and Ubiquitous Language modeling tools.

  • See Jay Kreps’ epic article “The Log: What every software engineer should know about real-time data’s unifying abstraction.”

  • Martin Fowler has done a couple of good write-ups on Event Sourcing and CQRS.

  • The quote is taken from Pat Helland’s insightful paper “Immutability Changes Everything.”

  • As brilliantly explained by Joel Spolsky in his classic piece “The Law of Leaky Abstractions.”

  • The fallacies of RPC have never been better explained than in Steve Vinoski’s “Convenience over Correctness.”

  • We are using Tyler Akidau’s definition of streaming, “A type of data processing engine that is designed with infinite data sets in mind” from his article “The world beyond batch: Streaming 101.”

  • Location Transparency is an extremely important but very often ignored and under-appreciated principle. The best definition of it can be found in the glossary of the Reactive Manifesto — which also puts it in context:

Interview Questions - String Chain

The String Chain interview question was asked to me at the first step of a hiring process. It was a bad experience as a whole, but I'm not a quitter. So I decided to write it up here to force myself to deeply understand this kind of algorithm question.

I copied the whole question below.


Given an array, words, of n word strings (words[0], words[1],..., words[n-1]), choose a word from it and, in each step, remove a single letter from the chosen word if and only if doing so yields another word that is already in the library. Each successive character removal should be performed on the result of the previous removal, and you cannot remove a character if the resulting string is not an element in words (see Explanation below for detail). The length of a string chain is the maximum number of strings in a chain of successive character removals.

Complete the longestChain function in your editor. It has 1 parameter: an array of n strings, words, where the value of each element words[i] (where 0 <= i < n) is a word. It must return a single integer denoting the length of the longest possible string chain in words.

Input Format

The locked stub code in your editor reads the following input from stdin and passes it to your function: The first line contains an integer, n, the size of the words array. Each line i of the n subsequent lines (where 0 <= i < n) contains a string describing words[i].


1 <= n <= 50000

1 <= |words_i| <= 50, where 0 <= i < n

Each string in words is composed of lowercase ASCII letters.

Output Format

Your function must return a single integer denoting the length of the longest chain of character removals possible.

Sample Input 1

6
a
b
ba
bca
bda
bdca

Sample Output 1

4

Sample Case 1: words = {"a", "b", "ba", "bca", "bda", "bdca"} Because "a" and "b" are single-character words, we cannot remove any characters from them as that would result in the empty string (which is not an element in words), so the length for both of these string chains is 1.

The word "ba" can create two different string chains of length 2 ("ba" -> "a" and "ba" -> "b"). This means our current longest string chain is 2.

The word "bca" can create two different string chains of length 3 ("bca" -> "ba" -> "a" and "bca" -> "ba" -> "b"). This means our current longest string chain is now 3.

The word "bda" can create two different string chains of length 3 ("bda" -> "ba" -> "a" and "bda" -> "ba" -> "b"). This means our current longest string chain is now 3.

The word "bdca" can create four different string chains of length 4 ("bdca" -> "bda" -> "ba" -> "a" , "bdca" -> "bda" -> "ba" -> "b", "bdca" -> "bca" -> "ba" -> "a", "bdca" -> "bca" -> "ba" -> "b"). This means our current longest string chain is now 4.

Given Empty Method
static int longestChain(String words[]) {
    // your code goes here
}

My Solution
import java.util.*;

/**
 * Created by cancobanoglu on 11/09/16.
 */
public class StringChain {

    public static void main(String[] args) {
        String[] words = {"a", "b", "ba", "bca", "bda", "bdca"};
        int longestChain = longestChain(words);
        System.out.println("longestChain " + longestChain); // prints "longestChain 4"
    }

    static int longestChain(String[] words) {
        Set<String> wordSet = new HashSet<>(Arrays.asList(words));
        Map<String, Integer> memo = new HashMap<>();
        int max = 0;
        for (String word : words) {
            max = Math.max(max, processWord(word, wordSet, memo));
        }
        return max;
    }

    // Longest chain starting at word: drop each character in turn and, if the
    // shorter string is still in the word set, recurse. Results are memoized
    // so each word is processed only once.
    static int processWord(String word, Set<String> allWords, Map<String, Integer> memo) {
        Integer cached = memo.get(word);
        if (cached != null)
            return cached;

        int best = 1; // the word on its own is a chain of length 1
        for (int i = 0; i < word.length(); i++) {
            String shorter = word.substring(0, i) + word.substring(i + 1);
            if (allWords.contains(shorter))
                best = Math.max(best, 1 + processWord(shorter, allWords, memo));
        }
        memo.put(word, best);
        return best;
    }
}

Overview on Google’s Physical Web Project and Bluetooth Smart for Beacons



The mobile and wireless communication market is characterized by rapidly changing technology and evolving customer demands, and with the rapid growth of wireless communication services and network technology, many new wireless technologies are emerging [1]. Beacon technology is a new Bluetooth communication technology that uses low-powered transmitters and enables smartphones and other devices to perform tasks when they are in proximity to a beacon [1].
In this paper, several approaches for building mobile communication services based on the detection of physical objects via network proximity are discussed. In this context, we can mention the Quick Response (QR) code, a 2D barcode used for mapping URLs to physical objects. Wireless tags are another widely used approach for detecting physical objects; they support the Bluetooth Low Energy (BLE) and Wi-Fi protocols, which almost all mobile phones support [2].
Nowadays, there are also more secure wireless communication methods such as RFID, NFC and Bluetooth. RFID and NFC use radio waves to allow a reader or scanner to communicate with a device that has some sort of electronic tag built in or added to it. NFC is derived from RFID (Radio Frequency Identification) and works by creating a “near field” (roughly 10 centimeters) using high frequencies, allowing NFC devices, that is, devices with an NFC module, to interact [3]. NFC technology is mostly used for secure transactions such as contactless payment. Bluetooth Low Energy (BLE), the newer version of Classic Bluetooth, is likewise a short-range wireless data transfer technology, though it operates at a much longer range: tens of meters compared to a few centimeters for NFC. Whereas NFC is focused on one-to-one data exchange, BLE allows multiple simultaneous connections.
Companies constantly struggle to come up with ideas that solve customer engagement problems, and they do not hesitate to overspend on them. Thanks to improvements in wireless communication technologies such as Bluetooth Low Energy (BLE), also known as Bluetooth Smart, and given the recent major increase in smartphone usage, companies can now reach their customers easily and push information to their customers' mobile devices at lower cost via beacons built on Bluetooth technology. In this context, this paper reviews the importance of Bluetooth technologies and Google’s latest experimental beacon technology, the “Physical Web”.


Classic Bluetooth is a wireless technology standard for exchanging data over short distances (using short-wavelength radio) between fixed and mobile devices [4]. The technology was originally designed for continuous, streaming data applications such as voice, and it has successfully eliminated wires in many consumer, industrial and medical applications. Classic Bluetooth will continue to provide a robust wireless connection between devices ranging from headsets and cars to industrial controllers and streaming medical sensors. Many of these connections are not good candidates for the new Bluetooth Low Energy technology, but many other new applications will be [5].

On the other hand, Bluetooth Low Energy (BLE) is an emerging wireless technology developed by the Bluetooth Special Interest Group (SIG) for short-range communication. In contrast with previous Bluetooth flavors, BLE has been designed as a low-power solution for control and monitoring applications. BLE is the distinctive feature of the Bluetooth 4.0 specification [6].

With the introduction of Bluetooth Low Energy (BLE) technology, there has been considerable interest in its possibilities in both the media and the market. BLE has important limitations as well as benefits. It is quite different from Classic Bluetooth, so different that one needs to consider carefully which technology best fits the application's needs [5].
The key feature of Bluetooth Low Energy (BLE) is its low power consumption, which makes it possible to power a small device with a tiny coin cell battery, such as a CR2032, for 5–10 years. As with Classic Bluetooth, BLE operates in the 2.4 GHz ISM band and has similar radio frequency (RF) output power; however, because a BLE device is in sleep mode most of the time and only wakes up when a connection is initiated, its power consumption can be kept to a minimum. Power consumption stays low because actual connection times last only a few milliseconds. The maximum, or peak, current consumption is only 15 mA, and the average is only about 1 µA [5].
Because of BLE’s lower power consumption, quicker connection set-up and larger number of potential connections compared with Classic Bluetooth, BLE makes indoor mapping with beacons possible. It also enables mobile in-store payment transactions.


Beacon in wireless technology is the concept of broadcasting small pieces of information. The information may be anything, ranging from ambient data (temperature, air pressure, humidity, and so forth) to microlocation data (asset tracking, retail, and so forth) or orientation data (acceleration, rotation, and so forth). The transmitted data is typically static but can also be dynamic and change over time. With the use of Bluetooth Low Energy, beacons can be designed to run for years on a single coin cell battery [7].
How do they work? We can use the analogy of a lighthouse. A beacon has one simple purpose in life: to send out a signal saying “I am here.” It is completely unaware of any mobile devices around it. It doesn’t connect to them, it doesn’t steal their data, and it doesn’t know anything about them. It just sends out its signal and says “hello”. In the lighthouse analogy, the ships are the mobile devices [8].
What is a beacon protocol? Just as Wi-Fi and Bluetooth are standards of radio communication, beacon protocols are standards of BLE communication. Each protocol describes the structure of the data packet beacons broadcast [9].

Beacons are platform independent. Several protocols have been developed by different providers such as Apple and Google. Apple originally came out with the iBeacon protocol. A beacon is a physical device with an antenna and a Bluetooth Low Energy stack that can send out packets, but that does not mean Android devices and others cannot see iBeacon devices.
The other well-known protocol is Eddystone, an open BLE protocol developed by Google. Its advertising packet is naturally different from iBeacon’s. In fact, Eddystone is designed to support multiple data packet types, starting with two: Eddystone-UID and Eddystone-URL. There is a third type of packet, Eddystone-TLM, as in “telemetry.” This packet is broadcast alongside the Eddystone-UID or Eddystone-URL packets and contains the beacon’s “health status” (e.g., battery life). It is mainly intended for fleet management, and because of that, the TLM “service” packet is broadcast less frequently than the “data” packets [10].
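These packet types can be told apart by the first byte of the beacon's service data, which carries the frame type. A minimal sketch, using the frame-type constants from Google's open Eddystone specification (the class and helper names here are my own):

```java
public class EddystoneFrames {

    // Frame-type values defined in the open Eddystone specification.
    static final int FRAME_UID = 0x00;
    static final int FRAME_URL = 0x10;
    static final int FRAME_TLM = 0x20;

    // Classifies an Eddystone service-data payload by its first byte.
    static String frameType(byte[] serviceData) {
        if (serviceData == null || serviceData.length == 0)
            return "EMPTY";
        switch (serviceData[0] & 0xFF) {
            case FRAME_UID: return "Eddystone-UID";
            case FRAME_URL: return "Eddystone-URL";
            case FRAME_TLM: return "Eddystone-TLM";
            default:        return "UNKNOWN";
        }
    }

    public static void main(String[] args) {
        System.out.println(frameType(new byte[]{0x10, 0x00})); // Eddystone-URL
        System.out.println(frameType(new byte[]{0x20}));       // Eddystone-TLM
    }
}
```

A real scanner would first match on the Eddystone service UUID before looking at the frame type, but the dispatch idea is the same.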
iBeacon provides two API methods for apps to detect iBeacon devices: ranging, which works only while the app is active and provides proximity estimations; and monitoring, which works even if the app is not running and provides binary “in range”/“out of range” information [10].
In this paper’s context, the Eddystone-URL packet ties directly into the concept of Google’s Physical Web, which is discussed later.
Beacons are not internet connected: as mentioned before, beacons are only responsible for sending out signals. Once positioned, they are like a lighthouse, unaware of themselves and of any devices around them. They are not connected to Wi-Fi; they can only broadcast BLE packets to say “hello”. Some industrial beacon devices, however, can connect to Wi-Fi so that they can receive updates and have their battery levels monitored.
Beacons do not steal your data: you actually need explicit opt-in from your customers. They need to download your mobile app and grant it access to location. Only then can you take the ID of a beacon you have in a physical location and ask the app and operating system to monitor for that specific beacon ID. When the mobile device sees it in the wild, it lets the app know it is seeing a beacon. The app can then go back to your server and say, "Hey, as customer ABC, I've seen beacon XYZ," and you know to take action. Opt-in is required, and so is the download. Privacy is very important with beacons; it is very well locked down and shouldn't be a customer concern.
Beacons can estimate distance. The most basic use of beacon technology is to determine how far a mobile device is from a beacon, but as anyone who has played with beacon ranging knows, these distance estimates carry a significant degree of uncertainty. For a beacon that is 5 meters away, distance estimates might fluctuate between 2 and 10 meters.
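One common way to turn received signal strength (RSSI) into a rough distance is the log-distance path-loss model. This is not part of any beacon specification, just a frequently used approximation; the class name, method name and the calibration values below are illustrative:

```java
public class BeaconDistance {

    /**
     * Rough distance estimate from RSSI using the log-distance path-loss
     * model: d = 10 ^ ((txPower - rssi) / (10 * n)).
     * txPower is the calibrated RSSI measured at 1 meter; n is the
     * environment factor (about 2 in free space, higher indoors).
     */
    static double estimateDistance(int rssi, int txPower, double n) {
        return Math.pow(10.0, (txPower - rssi) / (10.0 * n));
    }

    public static void main(String[] args) {
        // At the calibration point the estimate is exactly 1 meter.
        System.out.println(estimateDistance(-59, -59, 2.0)); // 1.0
        // A signal 20 dB weaker with n = 2 suggests roughly 10 meters.
        System.out.println(estimateDistance(-79, -59, 2.0)); // 10.0
    }
}
```

Because RSSI fluctuates, real implementations smooth it (e.g., a running average) before feeding it to a formula like this, which is exactly why the 2–10 meter spread mentioned above occurs.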
In short, there are several beacon technologies whose ways of transmitting data via the Bluetooth Smart protocol were developed by different companies, and their capabilities are largely the same. Still, it is not entirely apples to apples: iBeacon and Eddystone (Physical Web, Eddystone-URL) serve quite different purposes and have very little overlap in their use cases [12].

Understanding the Physical Web

The Physical Web is entirely about person-to-machine interaction and about standardising a universal usage method for the Internet of Things (IoT) [12]. The aim of this project is to provide “interaction on demand” so that people can walk up to and use any smart device without needing an intervening mobile app. This would make it possible for users to simply walk up to a bus stop and see the time until the next bus arrives, without any additional software [13].
The manifesto of the Physical Web reads: “The number of smart devices is going to explode, and the assumption that each new device will require its own application just isn’t realistic. We need a system that lets anyone interact with any device at any time. The Physical Web isn’t about replacing native apps: it’s about enabling interaction when native apps just aren’t practical.” That is to say, the Physical Web does not require a dedicated app; any Physical Web browser will see all Physical Web beacons (aliases: UriBeacon, gBeacon) within its immediate vicinity [14].
As mentioned before, the difference between kinds of beacons such as iBeacon and the Physical Web lies in the protocols developed by different companies. The working principles of the Physical Web and iBeacon, and the differences between them, are shown in Table 1.

The Physical Web advertises a 28-byte packet containing an encoded URL. The approach is designed as a “pull” discovery service in which the user will most likely initiate the interaction. For example, when someone arrives at a university campus, he or she can start an application that scans for nearby Physical Web beacons, or open the Chrome browser and search. The application or browser uses context to rank nearby objects and combine them with search results. It can also use calendar data, email or Google Now to narrow down interests. A background process with “push” capabilities could also be implemented; such a process could have filters that alert the user to nearby objects of interest. These interest rules could be predefined or inferred using Google’s intelligence-gathering systems such as Google Now [15].

As can be seen in Figure 3, when consumers want to search the vicinity around them, the Chrome browser shows all URLs that nearby Physical Web beacons provide.

Figure 3: The Physical Web how it works [17]

Eddystone-URL

The Eddystone-URL packet contains a single field: the URL. The size of the field depends on the length of the URL. The promise and purpose of the Eddystone-URL packet tie directly into the concept of the Physical Web. Whereas iBeacon or Eddystone-UID need an app to take the beacon’s identifier and translate it into certain actions, with Eddystone-URL the data is encoded directly in the beacon’s advertising packet. This means the user can access content, usually in the form of a website, without the developer needing to build a native experience. Remember that a Physical Web–enabled browser is needed to detect Eddystone-URL packets. Currently, that means Chrome and Opera for iOS, with more apps coming, including Chrome on Android. Alternatively, you can build your own Physical Web browser, or use Google’s Physical Web scanner app (available on iOS and Android). The URL could be a regular web page providing relevant information, e.g., a beacon next to a movie poster broadcasting a link to a YouTube trailer. It could also be a dynamic web application with contextual parameters embedded in the URL [10].
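To see how a full URL fits into such a small packet, here is a simplified decoder sketch. The scheme-prefix and expansion tables are taken from Google's open Eddystone-URL specification; the class and method names are my own, and reserved byte values are ignored for brevity:

```java
public class EddystoneUrlDecoder {

    // URL scheme prefixes, indexed by the first byte of the URL field.
    static final String[] SCHEMES = {
        "http://www.", "https://www.", "http://", "https://"
    };

    // Single-byte expansion codes for common URL fragments.
    static final String[] EXPANSIONS = {
        ".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
        ".com", ".org", ".edu", ".net", ".info", ".biz", ".gov"
    };

    // Decodes the URL field of an Eddystone-URL frame
    // (one scheme byte followed by encoded text).
    static String decode(byte[] url) {
        StringBuilder sb = new StringBuilder(SCHEMES[url[0]]);
        for (int i = 1; i < url.length; i++) {
            int b = url[i] & 0xFF;
            if (b < EXPANSIONS.length) sb.append(EXPANSIONS[b]);
            else sb.append((char) b);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] encoded = {0x02, 'g', 'o', 'o', '.', 'g', 'l', '/', 'a', 'b', 'c'};
        System.out.println(decode(encoded)); // http://goo.gl/abc
    }
}
```

This compression is why short URLs (often via a URL shortener) are recommended for beacon deployments: the whole link has to fit in the small advertising payload.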

To deploy beacons using the Physical Web, use Eddystone’s URL frame type (Eddystone-URL) to broadcast your website to users. Associate your beacon with any arbitrary URL and deploy it to a location. Users with a Physical Web-supported client such as Chrome can discover the website on their device.



REFERENCES

  • Namiot, Sneps-Sneppe, “The Physical Web in Smart Cities”, IEEE (Accessed 29 Dec. 2015).
  • Gomez, Oller, “Overview and Evaluation of Bluetooth Low Energy: An Emerging Low-Power Wireless Technology”, Sensors 2012, 12, 11734–11753; doi:10.3390/s120911734.
  • Joakim Lindh, “Bluetooth® Low Energy Beacons”, Application Report, Texas Instruments, Jan. 2015 (Accessed 29 Dec. 2015).