Enterprise-Wide Kubernetes, Ep 5: Automated Continuous Delivery to Kubernetes

Anh Nguyen: Welcome everyone, and thank you for joining today’s webinar, Automated Continuous Delivery to Kubernetes, presented by Damien Toledo, VP of Engineering and co-founder at Nirmata.

Today’s presentation will be about 40 minutes, with 10 minutes for Q&A. We encourage you to post your questions in the Ask a Question tab. Also, feel free to check out the Attachments and Links area for the presentation slides PDF and for the Enterprise-Wide Kubernetes data sheet. With that, I’ll turn it over to Damien. Enjoy.

Damien Toledo: Hi everyone. In the last four years we’ve seen a lot of companies moving to the cloud, moving their applications there. And in the last couple of years we’re also seeing companies using more than one cloud, so that’s really a trend that has been increasing.

We have a choice of clouds right now, between Google Compute Engine, Amazon, and Azure, and they are all production ready. We’ve definitely seen people using more than two clouds, and in some cases they are also trying hybrid solutions: they have workloads on-prem and workloads in the public cloud. That provides a lot of advantages for large and small companies, in terms of choices of pricing, choices of technologies, and choices of regions for deployment.

However, for the people in charge of deploying applications to these clouds, that represents a new set of challenges. Today we’ll focus the discussion on building CI/CD pipelines in the context of multiple clouds, multiple clusters, and multiple applications. We’ll see what the challenges are and what the solutions can be.

Just a brief word of introduction about me. I am the co-founder and VP of Engineering at Nirmata. Previously I worked at Atos in Europe, at a couple of startups like Jetstream and Meru Networks, and also at NetScout.

I have 20-plus years of experience developing enterprise software and telecom software. My focus is really distributed systems. I love developing anything related to distributed systems, and I love automation as well.

Today we’ll cover CI/CD and we’ll see what Kubernetes brings to the picture: why Kubernetes is interesting when you start building CI/CD pipelines. We’ll review the benefits and challenges of using Kubernetes, especially in the context of multi-cloud environments. Then we’ll do a quick demo, and I’ll take some questions.

I will assume for this presentation that most of you are somewhat knowledgeable about Kubernetes, but here are two sentences to define it. Kubernetes is an open source system that can be used to automate the deployment, scaling, and management of containerized applications.

It’s interesting because it’s a production-ready platform that you can use to deploy your applications, and it provides application portability: you can deploy your application on pretty much any cloud. Also, something very interesting is that this is today one of the most successful open source projects, and it has a very large ecosystem. Kubernetes has been very well designed, so it’s extensible and you can provide plugins for pretty much anything.

One thing I wanted to discuss initially is that there are really two categories of CI/CD pipelines: one that we would call the push model, and the other the pull model. The push model is what we have done for many years already. If you use Jenkins, chances are you are doing a push model: when you want to deploy an application, Jenkins takes all the decisions and pushes whatever is required to start your application in your environment.

The pull model is a bit more recent, I would say. Sometimes it is called GitOps; you may have heard the term, and that’s what I’m referring to here with the pull model. Especially with Kubernetes, we’ll see that it’s possible to describe the desired state of your application to Kubernetes, and Kubernetes will be the system in charge of monitoring the changes made to a Git repository. Each time a change is made, Kubernetes will try to update your application. In this case Kubernetes is really pulling the definition of the application from a Git repository.

For this presentation, however, I’m going to focus exclusively on the push model. I’m assuming that the majority of people today are running CI/CD pipelines using a push model. So we’ll see what benefits you can gain from using Kubernetes, some of the challenges, and how to solve them.

Let’s first take a look at the typical components you would find in a CI/CD pipeline using a push model. At the top here you see the source code management system. It could be anything; many people use Git, for instance. Next, you would have a continuous integration system. Again, there are many choices available out there; a few of them are represented here.

Something that could be new for some of you, if you have already implemented continuous integration, is that at this point you can introduce Kubernetes. We’ll discuss what Kubernetes brings to the picture, but that’s where it would sit in this layer cake.

And then you would also find a Docker registry. Your continuous integration system is going to build your code and generate some artifacts. The artifacts that are important in this context are the Docker images that need to be deployed, as your cloud application, on one of the clouds of your choice.

With Kubernetes, the second type of artifact is a set of YAML files describing your application. Once your continuous integration system has generated these artifacts, you are ready to deploy. Kubernetes works on top of Docker, so at that point we are deploying containers on one of the clouds. At the bottom you can see the infrastructure.

Let’s talk a little bit about what Kubernetes brings to the picture; there are really interesting points here. The first one I’d like to mention is the separation of concerns. If you use a Kubernetes cluster to deploy your applications, your DevOps teams will be able to focus on the application itself. Once your Kubernetes cluster has been deployed, a lot of things related to infrastructure management and container management are already taken care of. So, as long as you have a functional Kubernetes cluster, the focus for your DevOps team is going to be application management. And that’s something I really like: having the right people focused on the right problem.

Another very interesting thing provided by Kubernetes is that it uses what is called a declarative model. It means that in order to deploy an application you need to define that application, to define some kind of a blueprint if you want. There is a language that has been created for that: you use YAML to describe your application, and there is a set of constructs you can use to do so.

The advantage of this approach is that you can describe the desired state of your application and store that in your source code management system. Every time you make a change, you make it in your source code management system, and you can really keep track of what has been changed in your cloud.
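
As a minimal sketch of what such a desired-state definition looks like, here is a small Deployment; the names, image, and port are hypothetical, not taken from the demo:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world
    spec:
      replicas: 2                  # desired state: always two instances
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - name: hello-world
            image: myregistry/hello-world:latest   # the image your CI system built
            ports:
            - containerPort: 8080

Checked into source control, a file like this becomes the single record of what should be running, and every change to it is tracked like any other commit.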

Another interesting thing is that Kubernetes has been designed around a set of very powerful and comprehensive abstractions: things like services, deployments, stateful sets, secrets, config maps, ingress, etc. There are quite a few of them. What’s very interesting is that it was clearly designed by people who had the application first in mind. The level of abstraction is very good for modeling applications, complex applications, any type of application. A lot of people think about using the cloud for elastic applications with stateless microservices, but Kubernetes actually provides a lot of support for handling legacy applications that are absolutely not stateless. All these constructs exist in Kubernetes.

Another interesting thing is that Kubernetes, as a system that orchestrates containers, provides built-in workflows that are very suitable for building CI/CD pipelines. For instance, I mention here two features: there are built-in rolling updates and rollbacks for your services. Let’s say you have a service, for example a payment service that you want to deploy; this service may have multiple instances deployed on different virtual machines. When you want to update that service you can just send a simple command to Kubernetes with the new version, specify the new Docker image: “Please do a rolling update on my services.”

Also, you can configure very precisely how the rolling update is done; there are parameters that help you control this process. And if something fails, you can roll back to the previous version. If you have ever implemented these kinds of features yourself, by scripting or even writing programs, you know it’s hard to do. Here it’s really built into the system.
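
To make those controls concrete, here is a sketch of the relevant part of a Deployment spec; the parameter values, image names, and kubectl commands in the comments are illustrative, not taken from the demo:

    # Rollout controls on a hypothetical payment-service Deployment:
    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1        # at most one extra pod during the update
          maxUnavailable: 0  # never drop below the desired replica count
    # Trigger a rolling update by setting a new image, then roll back if needed:
    #   kubectl set image deployment/payment-service payment=myregistry/payment:v2
    #   kubectl rollout undo deployment/payment-service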

In addition to that, Kubernetes also provides what I would call a toolbox to build even more powerful workflows that you typically want in your CI/CD pipeline: things like blue/green deployments, or canary launches. These are not completely built in, unlike the previous workflows I mentioned where you can just send one command. You’re going to have to do a bit of work on top of Kubernetes to actually implement this type of behavior or workflow.

The things that are available to you are services, ingress, and some extensions like Istio to control traffic.
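
As one sketch of what that extra work can look like with Istio, assuming the application’s pods are labeled by version, a VirtualService can split traffic between a stable and a canary subset; all the names, labels, and weights here are hypothetical:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: payment
    spec:
      hosts:
      - payment
      http:
      - route:
        - destination:
            host: payment
            subset: stable
          weight: 90          # 90% of requests stay on the stable version
        - destination:
            host: payment
            subset: canary
          weight: 10          # 10% go to the canary
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: payment
    spec:
      host: payment
      subsets:
      - name: stable
        labels:
          version: stable    # matches pods labeled version=stable
      - name: canary
        labels:
          version: canary    # matches pods labeled version=canary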

Let’s talk about the challenges. It should be clear that Kubernetes brings a lot of good features for building a CI/CD pipeline. One thing I forgot to say while we were reviewing the benefits: I haven’t mentioned all the benefits of Kubernetes in a general way, only those relevant to CI/CD. For the challenges, same thing; I want to focus on the CI/CD context.

The first challenge I would mention is that if you have to deal with multiple clouds, Kubernetes is not going to help you a whole lot. It provides something very important, which is application portability, but then you’ll have to manage multiple clusters. It means that if you have a CI/CD pipeline, you may have to run that pipeline in each cloud. Or, if you have a centralized solution, you’re going to need some kind of VPN to access your clouds. Not impossible to solve, obviously, but it creates some complications.

The second thing is that you’ll have to deal with multiple clusters. There are already some features related to cluster federation in Kubernetes, but it’s still not there yet. Typically, what we do is deploy a dedicated cluster in each cloud when you start using multiple clouds. In this case you’re going to have to deal with different credentials, one set of credentials for each cluster, and potentially also one set of YAML files for each instance of the application you are going to deploy.
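
To give an idea of the credentials problem, a pipeline talking to clusters directly typically carries a kubeconfig with one context per cluster; everything below, cluster names, servers, and tokens, is hypothetical:

    apiVersion: v1
    kind: Config
    clusters:
    - name: aws-cluster
      cluster:
        server: https://k8s-aws.example.com
    - name: gce-cluster
      cluster:
        server: https://k8s-gce.example.com
    users:
    - name: aws-user
      user:
        token: <token-for-aws-cluster>     # separate credentials per cluster
    - name: gce-user
      user:
        token: <token-for-gce-cluster>
    contexts:
    - name: aws
      context: {cluster: aws-cluster, user: aws-user}
    - name: gce
      context: {cluster: gce-cluster, user: gce-user}
    current-context: aws
    # The pipeline switches targets with: kubectl config use-context gce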

Another thing that is kind of hard today with Kubernetes: I mentioned that you have all these constructs, services, deployments, stateful sets, etc. If you want to know the state of a complex application made of multiple services, it’s not easy. You’ll actually have to query a lot of objects to figure out whether everything is okay, and if it’s not, exactly which constructs or parts of the application are not functioning. The next point is troubleshooting, which is hard in any kind of distributed system. For that you’re going to need some tools, some support.

Also, some aspects of governance are not very easy to handle: defining all the access control, doing auditing for multiple clusters, or dealing with resource allocation across your clusters. None of that is straightforward. These are the challenges.

Now, what we are proposing at Nirmata is that you can introduce a layer between your CI/CD pipeline and all your Kubernetes clusters and clouds, to deal with everything I just mentioned. Ideally what you want is one access point, one endpoint, from which you can control multiple applications: applications running on different clouds, and applications running on multiple clusters. From your CI/CD pipeline, you don’t have to deal with multiple endpoints.

One thing we do provide at Nirmata that’s interesting in the context of multi-cloud is policy-driven deployments. It means that when an application is deployed by your CI/CD pipeline, we can analyze what type of environment it is being deployed into. Based on that, we can change some parts of your YAML files to adapt the deployment to the environment. For instance, if you deploy an application in a [unintelligible 00:16:45] environment or in production, you are not going to use the same password. In that case we can change the credentials on the fly to make sure the right credentials actually land in the right environment.
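
Outside of Nirmata, one common Kubernetes-native pattern for the same problem, sketched here with hypothetical names, is to keep the application YAML identical everywhere and reference a Secret whose contents differ per environment:

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials       # created with different values in dev, QA, and production
    type: Opaque
    stringData:
      password: replace-per-environment
    ---
    # The container spec then injects the value by reference, so the manifest
    # itself never changes between environments:
    #   env:
    #   - name: DB_PASSWORD
    #     valueFrom:
    #       secretKeyRef:
    #         name: db-credentials
    #         key: password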

As I said, it’s difficult to get the overall application health status, and that’s another thing that’s important for a CI/CD pipeline. When you push the application, or a change to the application, you want to move to the next step, which could be, for instance, starting some test automation, only if the application has been successfully deployed. Again, if you have to query many objects or use a lot of different APIs to figure this out, that’s going to be a challenge for your team. So providing an overall status is something very powerful for a CI/CD pipeline.

Another thing that may be required is to deal with team isolation, or making sure you can isolate your applications. Typically, clusters are used to run multiple applications, and potentially multiple users and multiple teams share the same clusters. Dealing with that manually, directly on top of Kubernetes, is very hard. There are some constructs available, but they are not so easy to use.

In the same way, if you want to define safe and secure access control to all your applications, and you have a lot of users, controlling that is a bit difficult with Kubernetes. Having a centralized solution helps a lot in the context of your CI/CD pipeline. Same thing for auditing: when you want to know what has changed, who changed what and when, it’s always complex in a distributed environment.
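
For reference, the constructs Kubernetes itself offers for this are RBAC roles and role bindings; a minimal sketch, with a hypothetical team-a namespace and group, looks like this:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: app-viewer
      namespace: team-a
    rules:
    - apiGroups: ["", "apps"]
      resources: ["pods", "services", "deployments"]
      verbs: ["get", "list", "watch"]   # read-only access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-a-viewers
      namespace: team-a
    subjects:
    - kind: Group
      name: team-a
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: app-viewer
      apiGroup: rbac.authorization.k8s.io

Multiply this by every team, namespace, and cluster, and you can see why managing it by hand gets difficult.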

Another thing that we do provide is alarms, both for infrastructure and applications. Again, that’s something not provided by Kubernetes, at least today.

One question you may also ask yourself is: if I deploy my application using a set of YAML files that I give to Kubernetes, how do I deal with promoting my code and my services from one environment to the next? Here I mention three environments, for instance: the dev/test environment, the QA environment, and production.

There are multiple solutions; I’m not saying this is the only one. But one easy solution, again because we use this declarative model, is to store the desired state of your application in a Git repository. Then you can leverage all the Git features, especially branches. You can have a branch for your dev/test environment, another branch for your QA environment, and another branch for your production environment. Then the only thing you have to do, and sometimes it may not be so easy, is merge the code from one branch to the next. That’s one way of dealing with this.

All right, next I’ll do a demo. Before I switch to the product, I’d like to introduce a few key concepts and discuss the workflows we can address with the solution I’m going to demo. The first concept I’d like to introduce is the catalog. I said that in Kubernetes you can describe your services, and your collection of services as an application, using YAML files. With Nirmata you can store, or load, these definitions into a catalog, and the catalog then becomes an application self-service for your different teams, your developers.

Once your application is in a catalog, it can be deployed in an environment. An environment is a logical grouping of running applications that have been deployed from the catalog. An environment could be based on region, for instance: you could have a production-east and a production-west. Or it could be based on your organization: an environment for your DevOps teams, an environment for production, an environment for performance tests, and things like that.

Next, we’ll see during the demo that we can address multiple workflows that people typically try to implement in their CI/CD pipelines. I define here two categories of workflows: what I call long-running applications and short-lived applications.

For long-running applications, typically you have an application that is running all the time in the cloud somewhere, and you want to keep pushing updates to it. This can be very suitable for your development teams: they may have a copy of the application they are working on, running constantly, and they use it to integrate the code between the different teams or different developers.

In this case your CI/CD pipeline is going to build the code and publish the artifacts we discussed before. The two sets of artifacts are the Docker images, and then the set of YAML files that needs to be provided to Kubernetes. At that point, you can use a plugin in Jenkins to push these changes to your running application.

Then we provide the state of the application: once we confirm that the application is up and running and everything has been updated, your CI/CD pipeline can continue with the next steps. Typically, what you do next is run some automated tests. That’s one type of workflow for long-running applications.

A second workflow, again for long-running applications, is that you can also leverage the catalog we provide. This time, instead of directly updating one specific running application, you update the catalog, and I can show you that. Nirmata will figure out which running applications have been deployed from that catalog entry, and will push, or propagate, the changes to all of them. And then again you can run your tests.

For short-lived applications, I mention here one workflow; overall I mention three workflows, but you can combine them in different ways, and you can actually mix the two categories. Short-lived applications are more for QA teams; they tend to work that way sometimes. Some of our customers’ teams don’t have an application running at all times: they deploy the entire application, test it, and then delete it. They do that because they have to deal with many applications and they want to save cloud resources, for instance, so they reuse the same system to test multiple applications.

Again, in this case your CI/CD pipeline is going to build the code and publish the artifacts. At that point, what you can ask the Jenkins plugin to do is deploy the entire application from scratch. It’s not an update this time; it’s a complete deployment of the application. Once the application is up and running, you can execute your tests. When your tests have executed, you can invoke the Nirmata plugin to delete the application. And eventually, one thing I’ve seen is that people then update the catalog. For instance, if you use our catalog as the self-service for applications, once in a while you have to update the catalog with new versions of an application. This is one way to validate, before you update the catalog, that everything is correct and the new version is working fine.

Let’s switch to the demo itself now. All right, what I’m showing here is the Nirmata platform. It can run on-prem or in the cloud. We have a service you can sign up for if you want to try the product, or you can deploy it on your own premises.

Here I already have several clusters available. You can see that some of them are in a failed state, and I have a couple of Kubernetes clusters running. We can take a quick look at the details for a cluster: you can see all the Kubernetes components themselves, their state, and some stats about them. We can also see the availability of the clusters, which is something interesting; you can see over time how your clusters are performing.

Let me show you the catalog I was talking about. The catalog has these applications, so you can find some applications you may already know, things like Drupal, Ghost, nginx, etc. For this demo we’re going to use a simple hello-world application, and also a shopme application, something that emulates a shopping application.

Here are the details you would see for this shopping application. You can see that it has multiple microservices; let me make it a little bit bigger for you. You can see it has microservices: customers, deals, loyalty, orders, etc. For each of them you can drill down and really see the details. You can see the same thing in the YAML that will eventually be provided to Kubernetes, but here it’s just a graphical representation.

What I’m showing is actually pretty powerful, because if you are not an expert in Kubernetes, it’s going to help you a lot. One thing you can see here, for instance, is the container image, the Docker image being used for this customer service. You see that it’s in the Nirmata registry and it has a green tag. That’s the catalog; all these applications can now be used by your developers to deploy the application.

Applications can be deployed in environments. Let me switch to the card view here; each card is an environment. As I said before, an environment is a logical grouping of running applications. For instance, if I go into this production-west environment, you’ll see that there is just one hello-world application running right now. If I go to a different environment, one that is shared, we have a Ghost application.

Let me go back to that Ghost application; we can drill down and look at the details. What we are looking at here is really a running application; this one is actually running in one of our labs, and the other one is running elsewhere. You can see here the multi-cloud aspect: I have environments, some of them on AWS, some others in our data center. For me as a developer it doesn’t really matter; I manipulate my application exactly the same way.

The first thing we’re going to do is walk through the workflow that consists of deploying the entire application. We have this shopme application in the catalog, and from Jenkins we’ll try to deploy it from scratch, to see how this can be automated.

Here I have Jenkins, and I have already created a project to deploy this application. If you are familiar with Jenkins, you’ll see that the beginning is fairly straightforward. I provide the Git repository where all my artifacts are stored, meaning all my YAML files. Maybe I can quickly show you what that looks like. Here I’m in GitHub; let me search for that application, it’s actually right here. This is the shopme YAML repository: for each microservice I have a directory, and in it I can find the YAML files. There you see the YAML description for this customer service, for this example.

You can organize your files the way you want; it doesn’t matter. You can have everything in one file, or all the files in the same directory; all of that is supported. As a first step you provide the credentials for the repository where your YAML files are stored. You can specify the branch, and again, that’s where you could deal with promoting the code from one environment to another. I simplified the demo here; typically you would build the code and generate the artifacts. I skipped all that part, and I’m going to focus on how you configure the plugin to actually deploy the application.

If you install the Nirmata plugin, you’ll see that it implements a build step; let me show you. Here you see “Invoke Nirmata Service”; you can add the build step like this. Then, to configure it, you first select the type of workflow you want to execute. I told you that you have the choice between deploying the entire application or updating a running application; you can also delete the application, or update the catalog. That’s what you select first.

In our case, what we want to do is deploy the application in an environment, so we select this. You provide the Nirmata endpoint: it could be our production system running on AWS, or, if you have installed Nirmata locally, something in your data center.

You provide the credentials; here, if you want, you can have just one API key to talk to Nirmata, and it’s provided by the UI. Once you have provided that, what’s nice is that we can discover what’s running in your cloud. That’s what I was mentioning: if you don’t have this kind of layer on top of Kubernetes, this is going to be difficult to deal with; you’d have to have credentials for everything, configure endpoints for everything, etc.

Here you need just one set of credentials, and we can discover all your environments. For instance, I’m going to select production-west; if you remember, production-west was one of the environments. Now I can deploy this application, so I’m going to give it a name. I could deploy it either from the catalog or from the Git repository.

Again, there is a lot of flexibility in selecting the files you want to deploy. For instance, I’m going to restrict what I want to deploy from this application. I think I have a deals service, and I should have an orders service; let me select just these services, and maybe a third one, let’s see what we have: customers. You could even select certain patterns and things like that. That’s all you have to configure. And again, we can deal with any cloud, as long as it has been configured and your clusters have been set up. I’m going to save that and run the build, and we’ll see what happens.

You can see that the plugin is picking up all the files. Now let’s go back to our environment; you see that the application is being deployed. We can drill down into it, and you see the three services I selected; we can start seeing some details. We can actually see live, from the UI, what’s really happening between your CI/CD pipeline and the Kubernetes cluster we selected in this case. You see all the constructs here, all the Kubernetes resources, indicated step by step as they are applied to the Kubernetes cluster, and whether each one is successful. If there is an error, you get the details on it.

You can even take a look at the exact YAML definition that was provided to Kubernetes. It’s going to take a minute or two for everything to settle down. One thing you can do if there is an error is start looking at the logs of the application containers. Again, this is from one central point of entry: it doesn’t matter which cloud you are on at this point, and you can deal with multiple clouds at the same time. You’ll see that the application is now running.

One thing I mentioned previously, which is important to me, is having this concept of an application as a grouping of services. Now you have an overall state telling you that everything is fine; we check every aspect of your application to compute that overall state.

Once you deploy, you can even exec into a container, if you have the authorization to do it. For instance, again from a central point, here I’m inside my customer service; I could inspect things if I suspect some issues. You can see the deployment of the complete application. And we do deploy complex applications that have 30-plus services, each of them with multiple instances.

On your side, in your CI/CD pipeline, you also get details about what’s happening while the application is being deployed. If I go to the end of this log, we actually return the number of tasks that are pending and their state. So at the end, the plugin knows exactly that the application is up and running, and you can move on to the next step, which would be testing. That’s one workflow: deploying the entire application.

Another workflow is the update, for long-running applications. Here we already have this hello-world application running; that’s a simple one. What we’ll do is try to update it. Let’s take a look at one thing: right now it has the tag latest; we see that the current version of that Docker image is latest. Let’s simulate that a developer has modified the service, checked in the change, Jenkins has compiled the code and generated a new image, and the YAML files have been updated with a new tag. We’ll do it manually here. I go to my Bitbucket; I have the repository for this hello-world service, and there’s the tag, which is latest. I’m going to change it to something else, just to simulate the fact that a developer has made a change to this service. I’m going to use a blue tag.

Let’s take a look at how the plugin was configured this time. It’s slightly different from what we did before. I have my repository; the difference here is that we selected the update application workflow. And just to show you another feature, I selected updating the catalog. You could update the running application itself, but I’m going to update the catalog directly, and we’ll see the effect of that. Let’s execute this workflow and take a look at the log; that’s the YAML we are pushing to Nirmata. Let’s go to the catalog and open our hello-world application. You’ll see that the tag has changed now; let me make it a bit bigger. We see that the tag is blue. That’s exactly what I entered in my source code management.

Now, the interesting part: Nirmata actually scans all your environments, and it should have detected that there is a hello-world application running in this environment. Now we have this icon displayed; it says that there is a pending change. Let’s take a look. It tells me that the deployment has changed in the catalog, and it’s asking me whether I want to propagate this change to this running application. I can review the exact change that was made: I can see in my YAML file that what changed was the tag of the service, from latest to blue. And here I can accept or reject that change; I’m going to accept it in this case.

Then we’ll see the change being propagated; that’s actually what’s happening right now. You see that Kubernetes is now pulling the new image, which should have the blue tag, and once this one is up and running, Kubernetes will delete the old version of the service. I can just verify that this one also has the blue tag, and that’s what we are seeing here.

That’s how you can use the Jenkins plugin, along with Nirmata, to handle multiple clouds. One thing I haven’t shown you is that each environment is associated with a particular cluster. You see here it’s associated with this cloud cluster; I could go to that cluster and again have the same view with [unintelligible 00:43:57]. As a developer, or as a person who has to deal with the CI/CD pipeline, I have just one endpoint, and from that endpoint I can discover all my clouds, all my clusters, and all my applications. And I can use the same type of jobs to update many different applications.

This concludes the demo. I’m ready to take questions, if we have any. I’m going to switch to that.

Anh: The first question we have is, “Can Nirmata manage containers on bare metal servers?”

Damien Toledo: Yes, absolutely. Kubernetes can run on bare metal. As long as you have Docker running on the bare metal servers, everything is fine.

Anh: The next question we have is, “What are the Kubernetes constructs that can be used to implement blue/green deployment?”

Damien Toledo: There is actually a very good blog post in the Kubernetes documentation itself; if you search for blue/green deployment, I think you will find it easily. What’s interesting is that you can use the constructs of services and pods. What is explained in that article is that if you have a service, let’s take the example of the payment service in the shopping application, you would actually define three services instead of one: a service called production-payment, then a service called blue-payment, and one called green-payment.

And then you would have pods, running containers, for each of these versions: a container running the blue version and a container running the green version. What you can do with Kubernetes is select, for each service, which pods it targets; we also call them pods because, while a container is not the same thing, containers run as part of pods. You select which pods a service targets with selectors. Then, just by changing the pod selectors, you can switch your production service from blue to green, or from green to blue.
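
A sketch of that switch, with hypothetical names and labels, looks like this; the production service selects the blue pods until its selector is changed:

    apiVersion: v1
    kind: Service
    metadata:
      name: production-payment
    spec:
      selector:
        app: payment
        version: blue        # flip this to "green" to switch production traffic
      ports:
      - port: 80
        targetPort: 8080
    # The blue and green pods carry matching labels, for example:
    #   labels: {app: payment, version: blue}
    #   labels: {app: payment, version: green}
    # The switch can then be a single patch:
    #   kubectl patch service production-payment \
    #     -p '{"spec":{"selector":{"app":"payment","version":"green"}}}'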

On top of that, you may also need things like Istio to control some traffic, or to set some routes in an ingress. It’s a combination of different things you have to use; that’s why it’s not really a push-button type of functionality. But everything is there to implement it.

Anh: Okay, thank you. The next question we have is, “What is the purpose of grouping running applications into environments?”

Damien Toledo: There are several benefits and several features associated with that. First, from a pure organizational point of view, you can associate an environment with a team, and they will see only the applications they care about inside that environment.

Or you could do it by region, as I mentioned before. Once you have defined this organization for your applications, what’s interesting is that you have features like resource quotas: you can associate resource quotas with each environment, and you can control how the resources inside your clusters are shared by your teams, your developers, or your applications.
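
As a sketch of the underlying Kubernetes construct, assuming an environment maps to a namespace (the namespace name and numbers here are hypothetical), a quota looks like this:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "8"       # total CPU the environment's pods may request
        requests.memory: 16Gi   # total memory the environment's pods may request
        pods: "50"              # cap on the number of pods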

Same thing in terms of access control: you can restrict access to certain applications, to certain teams, or to certain users. Once you have this kind of grouping, it helps you overall in terms of governance, I would say.

Anh: Thank you, Damien. Are there any other questions? If there aren’t any more questions that concludes today’s webinar. Thank you very much. And for more information on today’s topic, please visit our website at www.Nirmata.com. Thank you.

Damien Toledo: Thank you everyone.