Efficient Development Workflows Series: Developing a New Feature

How to save development time by working efficiently

This is a republished blog post. Original source: http://blog.codeship.io/2013/08/16/the-codeship-workflow-part-1-developing-a-new-feature.html


With this blog post we start a new series about how we work at Codeship. Many people have asked us how we develop features, what our workflow looks like, and which apps we use every day.

This blog post focuses on the workflow for implementing a new feature, from branching off master until the change is ready for a pull request. The following posts will cover our internal communication, how we do pull requests and code reviews, and take an in-depth look at our deployment strategy.

Git branching model
We follow the GitHub Flow model of development (check out Scott Chacon's article on it), so whenever we start a feature we create a feature or bug branch. Most of our team uses git-extras by visionmedia for this.
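
As a rough sketch of what that looks like on the command line (the branch name here is made up for illustration), either plain git or the git-extras shortcut does the job:

    # plain git: branch off an up-to-date master
    git checkout master
    git pull origin master
    git checkout -b feature/invite-collaborators

    # the same thing with git-extras installed
    git feature invite-collaborators    # creates and switches to feature/invite-collaborators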

[Image: The Codeship Workflow - Branches]

Typically only one person works on one branch. If we need more people to work on a feature we break it down to the smallest possible chunk that one person can ship.

For example, consider one of our latest improvements: Ben, who joined us in July, implemented a first basic version that allowed users to invite collaborators from GitHub. He worked on his own feature branch and had a simple UI that was ready to be shipped. After the feature passed the pull request and code review, his changes were shipped.

Then Alex created another feature branch from master and implemented the final user interface, which makes it super easy to invite anyone who has committed to the GitHub repository to Codeship.

[Image: The Codeship Workflow - Teammate Invitations]

Both committed and pushed regularly while still working on the feature. Whenever a small piece is done, we typically push it to GitHub to run our complete test suite. While Codeship runs all of the integration tests, we keep working on the feature.
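
In practice this push-early loop is nothing more than ordinary git; a minimal sketch (branch name hypothetical, matching the example above):

    # commit the finished slice of work and push the feature branch
    git add .
    git commit -m "Basic collaborator invitation UI"
    git push origin feature/invite-collaborators
    # the push triggers a build on Codeship, which runs the full test suite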

This way we very quickly see if our changes broke any part of the application without running the full test suite locally. And breaking a feature branch is absolutely ok. We want our developers to push early and often and let Codeship take care of the tests so they do not waste time.

[Image: The Codeship Workflow - Steps of your builds]

There are numerous advantages to shipping a minimum viable feature first. We keep waiting times between developers to a minimum while still shipping improvements very quickly, which removes a lot of unnecessary communication. And because two developers never work on the same feature branch, we never run into merge problems.

Of course there are challenges with this workflow. Sometimes features are shipped with the expectation that they will be improved right afterwards, but something else needs immediate attention, so the improvement can take a while to ship. That is why getting the minimum viable feature right is very important: big enough to be valuable, but small enough to be shipped fast by a single person.

We are very interested in your workflows, so please leave a comment about how you work on new features. If you have any questions, leave a comment, send us an in-app message, or tweet us at @Codeship.

Next time we will show you how we go from code to pull request, to code review, and then to merging into master. Stay tuned.

More Stories By Manuel Weiss

I am the cofounder of Codeship – a hosted Continuous Integration and Deployment platform for web applications. On the Codeship blog we love to write about Software Testing, Continuous Integration and Deployment. Also check out our weekly screencast series 'Testing Tuesday'!
