INTEGRATE2018–Azure (Service Bus) eventing and messaging

Dan Rosanova talked about and showed some numbers and features of Azure Event Hubs and Azure Event Grid.

There is an amazing, almost insane, number of messages being handled by this service today.

In Clemens' session he talked about the difference between messaging and eventing (among other things).

[Slide: messaging vs eventing]

Dan went into more detail talking about what Event Grid is for…

[Slide: what Event Grid is for]

…and what the core concepts for Event Grid are.

[Slide: Event Grid concepts]

As well as what Event Hubs are and how those work.

[Slides: Event Hubs overview]

This table gives a high-level overview of some of the concepts of the services and their differences.

[Table: service concepts compared]

In a very over-simplified picture of the differences between Grid and Hubs, you could say that Event Hubs is for fan-in and Event Grid is for fan-out scenarios. They are different tools that work well for different things. There is no one tool that works best for everything, and in that sense there is no one service that works best for everything.

Johan Hedberg comments: The fact that there is no one size fits all, no silver bullet, is as true as always. Especially working with cloud services there is overlap in what the different services offer. You can see how you can solve a problem regardless of whether you choose service A or B. The services we are using today are in many cases not the services that we began with. They just didn't feel like the right fit as we started out. However, as we tried and tested and monitored and evaluated our way through them, we found the services that worked best for us – were most stable, supplied the best throughput and latency and kept the costs low. For several of those scenarios we today use Service Bus services, even if we didn't start there. This to me highlights another benefit of cloud services: it is, in most cases, just so easy to actually try a new service. Also, to keep on the Service Bus / messaging topic, these services feel very mature and very stable at this point in time. Monitoring is not always easy though, although the metrics you need are there. You just need a good way to gather, react to and display them. Being at INTEGRATE2018 we should probably take that chance to look more at the features of https://www.servicebus360.com/

INTEGRATE2018–API Management

Miao Jiang talked about the rise of APIs and their increasing importance, as well as the increasing standardization and presence of APIs in applications today. APIs are a key business driver for many companies, and the API economy is a well-known term today. Again, this summary will only contain some key highlights. As with the other technologies there will be deep-dive sessions at Integrate later in the week.

To meet the importance of APIs today, you have API Management. In the use of APIs there are basically two roles: publishers and consumers. For either of these roles there are a number of key concerns, and helping with those concerns is where API Management comes in.

For many of the needs you have within API management there are policies.
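
As a small illustration of what a policy can look like (not from the session – the resource group, service name, API id and exact cmdlet usage below are assumptions for the sake of example), a standard rate-limit policy could be applied to an API with the AzureRM.ApiManagement PowerShell cmdlets roughly like this:

# Illustration only – resource group, service and API names are assumptions.
$ctx = New-AzureRmApiManagementContext -ResourceGroupName 'apim-rg' -ServiceName 'contoso-apim'

# A built-in rate-limit policy: allow at most 10 calls per 60 seconds per subscription.
$policy = @'
<policies>
  <inbound>
    <base />
    <rate-limit calls="10" renewal-period="60" />
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
</policies>
'@

Set-AzureRmApiManagementPolicy -Context $ctx -ApiId 'echo-api' -Policy $policy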

The API-M service has seen impressive growth, like many of the other cloud services.

For API Management, the documentation starting point is here.

The main msdn forum is here.

The uservoice page is here.

The planning board is here. Miao mentioned some of the recent things the team has done.

And that is just part of it, there are many other things in the product.

As far as planning goes there is more on the planning board than can be shown in one screenshot, so go there to view all of it. A selection of the board is given in the screenshot below.

[Screenshot: API Management planning board]

Johan Hedberg comments: I love the Application Insights integration. To me, having good insight is key to trust and understanding of what your application running in a cloud service is doing and how it is doing. I have a feeling, based on what I have heard and seen so far, that people sometimes start forgetting about tracing, logging and following up on their application or code as they deploy to cloud services. Logging and examining tracing and metrics is not less important just because you deploy to the cloud. It is as important, or perhaps even more important! Deploying to cloud services does mean spending less time on infrastructure and plumbing, but it does not mean lower quality requirements. You still want the same quality, or better. Just because you do not have a server to log into to view your logs does not mean you should not monitor your logs. There is so much functionality in the platform for monitoring. It's up to us to use it.

INTEGRATE2018–BizTalk (Hybrid Integration)

Paul Larsen and Valerie Robb presented on BizTalk Server.

One of the highlights and most important parts of the session was the fact that the end of mainstream support is closing in on previous versions of BizTalk Server.

They also went through the features of Feature Packs 1 & 2 and talked about the upcoming FP3.

The key focus for FP3 is new adapters, among those Office 365 mail, calendar and contacts. These were demoed using the added OAuth service that has been connected to the ESSO service within BizTalk.

The team is also keeping up with new CUs.

There are a number of things with advanced statuses on the BizTalk UserVoice that were not covered in today's session, which I hope are still given some love.

BizTalk has more than one forum, but the general one is here.

The team blog is here.

The BizTalk Server core documentation’s root folder is here, while the FP2 configuration and walkthroughs are here. If you have BizTalk Server 2016 Developer or Enterprise (with Azure EA or SA) you can download the Feature Pack here.

Johan Hedberg comments: For me BizTalk is more and more a part of the Hybrid Integration story, rather than THE integration story. So far the on-premise data gateway is far too limited to be a replacement for all of the on-premise integration needs that exist within mature, established companies today. As the Integration Service Environment (ISE) gets established, with the possibility to connect to vnets and through that more broadly, while still securely, connect to on-premise resources, this might become of lesser importance as the technologies progress, but today it's not there. However, I do not think that Logic Apps coming to Azure Stack will be a major thing for the role of BizTalk, but I could be wrong. Time will tell.

INTEGRATE2018–Azure Functions

Azure Functions is Serverless. You do not have to think about servers, you only think about your code. It's event-driven and scales instantly. You are only charged when your code is actually running.

[Slide: Azure Functions is serverless]

If you have an IoT doorbell that captures an image and uses image recognition to identify whoever is at the door, that functionality is available all of the time, but it only runs when someone actually shows up at your door. Serverless helps build applications in a fraction of the time it used to take.

Azure Functions is part of the Microsoft Serverless platform.

[Slide: the Microsoft serverless platform]

This session was an introduction to Azure Functions. Rather than trying to repeat everything said I'll give some highlights. If you want to learn more I recommend these resources.

If you want to start learning about Azure Functions, the MSDN starting point is here.

The App Service team blog, with Azure Functions tagged posts is here.

User voice is here.

MSDN forum is here.

"If you are doing something more complex, I recommend you use one of the more advanced IDEs." Aka don't use the portal for production-grade code and functionality. Both the Visual Studio and Visual Studio Code IDEs have great tooling. Visual Studio probably has a slightly better experience today, but both can be used.

Azure Functions is often used in Azure today to extend the functionality of other services.

[Slide: where Functions fits]

You can run Functions in a variety of locations.

Jeff also talked about some anti-patterns. As you begin building production-grade code on Azure Functions these are good to keep in mind.

Another topic covered was Durable Functions, which led to a discussion about choosing between Logic Apps and Durable Functions for some workloads.

One key takeaway from this is, like in so many other cases, that personal preference will play a very big role.
You can read more at http://aka.ms/durablevslogic

Johan Hedberg comments: We've been using Functions for a while now, for several customers in several different projects. It works very well. It's great both as a way of doing something very quickly and for production use. Especially with the integration with Application Insights you get an easy-to-build and easy-to-deploy application that still allows you very detailed insight into the workings of the application. Since it's Serverless and you pay per use (unless you actually tie it to your own App Service instance, which you can), you don't pay for the number of Function Apps you have, or the number of Functions. Think microservices. You can quickly end up with a mess (a lot of Functions with questionable responsibility boundaries) if you are not careful. Also, another lesson quickly learned is that even if Azure Functions scales well and quickly, if you are using a limited resource, like a database, you still have to take care. If you are using Azure Functions with Logic Apps, you can easily set up a CI/CD process that will deploy your Functions and Logic Apps together. I have a previous blog post as well as a video available from last year's Integrate conference.

INTEGRATE2018–Logic Apps

Kevin Lam delivered an introduction to Logic Apps. This session was a walkthrough of what Logic Apps is and the features and functionality it has today. A 200-level session: comprehensive but non-detailed. Derek Li gave a few demos showing how easy it is to integrate Logic Apps with other services. Sessions later in the conference will go more into depth on selected parts of the different features.

Are you just getting started with Logic Apps? There is great MSDN content out there. Here is a starting point. And of course a lot of community blog content as well.

This session showcased improvements in the Event Grid trigger to be able to filter on certain events in the trigger itself, without having to receive the message and filter with a condition inside the workflow. It also showcased the important, and extremely easy, way to use additional services, like Functions or Cognitive Services.

In another session on Logic Apps later in the day the team also talked about "Enterprise Integration with Logic Apps", in which they mentioned the new features of the enterprise connectors, with some SOAP connector news as well as SAP adapter improvements. With SOAP you can now create first-class SOAP connectors by importing a WSDL (also see custom connectors). You can also create passthrough SOAP services and use the on-premise data gateway to access services behind the firewall.

For the SAP connector, the gateway registers itself with SAP to receive an event / a message instead of polling.

They also talked about some of the news in mapping, like XSLT 2.0/3.0 maps. XSLT 3.0 maps allow you to do things such as a JSON > XML > JSON mapping. There are also a load more built-in functions and other things such as performance improvements, dynamic evaluation, etc.

They also talked a bit about what’s new in monitoring.

One of the more interesting parts of the introduction/overview session was of course the “What’s coming?” section:

  • China cloud
  • Smart(er) designer, to make the development process faster
  • Dedicated and connected – ISE, vnets, dedicated stamp
  • Testability – being able to supply input to triggers, mocking, etc.
  • On-prem – Azure Stack
  • Managed Service Identity – with MSI your Logic App can be given an identity to avoid using private identities.
  • OAuth request triggers – expand the triggers from SAS tokens to make it possible to also do OAuth
  • Output property obfuscation – the ability to mark a property as not being exposed and have it encrypted
  • Expanded Key Vault support – using Key Vault to pass parameters to connectors. Today it's possible to use it as you deploy.

Later they also talked about what’s new in Enterprise Integration specifically.

Do you want to know more about what is coming? You know the team has their planning board online? It’s here.

Don’t miss the monthly updates directly from the team here.

Also catch up using the team blog.

Also, if you are missing something, make your voice heard using the UserVoice here.

Got a question? The MSDN forum is here.

Johan Hedberg comments: This session is of course a must have. Of the 400 or so in attendance, about 50% raised their hands when asked "who has used Logic Apps?". More than I expected if that means "in production", but less than expected if it means "have ever tried it out". Great to see the innovation of new features continuing. And again, features that enable you to reduce your usage of the platform, and the instances started, unless you actually want them, are great. Even though some things, like SOAP support for the on-premise gateway, have been added, I still want to see more hybrid capabilities and more features in the on-premise data gateway.

INTEGRATE2018–The Microsoft Integration Platform

Starting the line of sessions at INTEGRATE2018 is Jon Fancey with "The Microsoft Integration platform". He spent the first half of his session talking about digital transformation, the speed of change, disruption and innovation. Since it's a keynote type of session, it's kind of broad and, I expect, establishes a baseline to build upon. He emphasized the importance of changing, and of being the one that initiates the change. He also mentioned how Microsoft works with customers, ISVs, partners and the community to help with that.

Matthew Fortunka, Head of Development for Car Buying and Confused.com, was brought on to showcase how the technology available had been used to transform the way they do their IT.
Confused.com is one of the leading car insurance companies in the UK. When they started moving to Azure infrastructure it was a lot about IaaS.
Their next step was building a cloud-native version of the application.

They were initially "skeptical" about using Logic Apps, but they soon found benefits with the visual designer, the speed of creating something with it, and the possibility to communicate with product owners and other developers around it. They are now also looking at expanding into Functions and API-M.

“Without the technology none of the rest of the company would exist”.

After listening to Matthew, Microsoft continued, and more people from the team joined in through several demos to showcase a lot of technologies working together to form a complete Microsoft Integration platform.

We also saw use of the Integration Service Environment (ISE), which allows you to run your own private environment, a private stamp, of the Logic Apps integration capabilities (now available in private preview).

Another nugget in this presentation was the use of the SAP connector through the on-premise gateway using a trigger. The trigger was webhook based and the address of the webhook was registered on the gateway. So no more of that pesky polling that can significantly affect your Logic App costs if you need to poll often.

Johan Hedberg comments: Microsoft is doing a lot of cool things within the integration sphere. Most notably the messaging, serverless and iPaaS space has brought on a whole new set of tooling over the last couple of years. And even BizTalk Server has seen a rejuvenation. But it (BizTalk) is also a product that has stayed the same for a long time. Stability is actually one of its key strengths. It has also made it possible for a large group of people to reach a reasonable level of experience in creating great customer solutions using it. It will be interesting to see during the rest of the conference what change or disruption Microsoft will continue to bring or announce. From a high level, moving from infrastructure and managing servers yourself to pay-as-you-go cloud services, with the possibility to easily add more, start small and build as you go, allows a completely different approach to onboarding new technology: adding things as you are ready, with an initial cost of next to nothing, and at a speed that, compared to having to create your own infrastructure for it, is amazing. That is disruptive. But it's not really new any more, from a technology standpoint. It is still new for a lot of companies out there, though, and it is an amazing possibility for change for them. At the same time, the demos showed and highlighted that it's not an either-or. You can use the new technologies to extend and expand on your current integration solutions. Over time I believe the cloud is the way to go, anything else seems unlikely, but today you do not have to go all in, nor do you have to choose. Use both. Make it work for you, the way you want. You're in control.

Visual Studio Team Services Logic Apps continuous integration and deployment

…or "The code to my INTEGRATE2017 session".

So, INTEGRATE2017 just ended (actually, at the point this post is out, it ended about a month ago). Fantastic event! If you've never been: in short, it's an integration-focused event running over a couple of days in London, featuring Microsoft product group and community speakers. Regardless of whether you were there this year or not, I very much encourage you to be there next year (or catch the US version in October).

I had a session: "Logic Apps continuous integration and deployment using Visual Studio Team Services". In it I showcased a process for doing just what the title says: a process for developing Microsoft Azure Logic Apps solutions with Git source control hosted in VSTS, using the build and release management capabilities to package and deploy Logic Apps, Integration Account artifacts (schemas and maps), as well as Functions.

The video and slides are available on the INTEGRATE2017 site (or will be soon) and the slides are also available from me here.

What I'd also like to share is the sample code and scripts that I used, available here, as well as a step-by-step walkthrough of the actual Visual Studio Team Services configuration.

The primary purpose of creating this process is the requirement to have a repeatable build, release and deployment pipeline that would look the same, work the same, and be configured the same way for all developers in a team that over time will build a lot (think in terms of 1500+) of integrations based on Logic Apps. One of the goals is also to limit the amount of work each developer has to do when building an individual integration and to offload some of that to the build and release steps, for example creating a resource group template for deploying the developed Azure Function, or adding the code to enable diagnostics in every Logic Apps resource group template. There are many more steps and actions we have taken along those same lines, and perhaps I will have more blog posts about them, but I will try not to lose the thread of this post by getting sidetracked. Much like I tried to simplify some things and keep my talk focused, I will do the same with this post.

As far as the actual build, release and deploy process goes, much can be improved and made even easier than it is. With a few simple tweaks to the build and release definitions (as long as you follow a naming convention) things can easily be changed to use more of the built-in variables, so that the definitions can be cloned from one integration to another without any changes – the name of the build or release definition is all that will change and all that is needed. Not all integrations will include a Function, nor schemas and maps, and if so those steps can simply be removed.
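
For example (purely as an illustration, and assuming the build and release definitions are named after the integration, e.g. INT0001_ProcessPurchaseOrder), the hard-coded paths used further down could instead be written with the predefined definition-name variables:

Project: **/$(Build.DefinitionName)/*/*.deployproj

Source Folder: $(Build.SourcesDirectory)/$(Build.DefinitionName)/$(Build.DefinitionName)_Artifacts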

Overview

So first off, the process. It is covered in more detail in the slides as well as the video, so go there for more coverage. Here I am just pasting an image of it as a reminder of what we are trying to achieve.

[Image: build, release and deployment process overview]

The Visual Studio solution

Also, before we dive into what the VSTS process looks like, let's take a look at the VS.NET solution and project structure, so that we know what we are trying to, in the end, deploy. Again, the code is available here.

[Screenshot: Visual Studio solution structure]

To sum it up, we have:

  • A Resource Group Template project, containing the Logic App and the API Connection(s)
  • An Enterprise Integration Pack (aka Integration Account) project containing two schemas and a map, and also the XSLT that is the result of compiling the map.
  • An Azure Functions project

Apart from what is contained within the Visual Studio solution for a specific integration, it should also be noted that we have a structure in source control that contains some of the other shared artifacts that we will need and use in the build and release definitions. Let's briefly go through those as well.

First off, I have my scripts folder.

For the purpose of this walkthrough it contains two scripts:

  • Deploy-AzureIntegrationAccount.ps1, which is a helper script to deploy schemas and maps into an integration account from a folder.
  • Enable-AzureRmDiagnosticsSettings.ps1, which is a helper script to enable diagnostics logging, in this case to a Microsoft Operations Management Suite (OMS) workspace.

I will not go into detail on either of these scripts here, just note what they are and what they do (though a rough sketch of the first one follows below).
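
To give a rough idea of the shape of the first one, a minimal sketch of such a deploy script could look something like the following. This is not the actual script – the parameter names, integration account details and the use of the AzureRM.LogicApp cmdlets are assumptions for illustration only.

# Illustrative sketch only – resource group and integration account names are assumed.
param(
    [Parameter(Mandatory = $true)][string]$rootPath,
    [string]$resourceGroupName = 'integration-rg',
    [string]$integrationAccountName = 'integration-account'
)

# Upload every schema (.xsd) found under the root path
Get-ChildItem -Path $rootPath -Filter *.xsd -Recurse | ForEach-Object {
    New-AzureRmIntegrationAccountSchema -ResourceGroupName $resourceGroupName `
        -Name $integrationAccountName -SchemaName $_.BaseName -SchemaFilePath $_.FullName
}

# Upload every compiled map (.xslt) found under the root path
Get-ChildItem -Path $rootPath -Filter *.xslt -Recurse | ForEach-Object {
    New-AzureRmIntegrationAccountMap -ResourceGroupName $resourceGroupName `
        -Name $integrationAccountName -MapName $_.BaseName -MapFilePath $_.FullName
}

A real script would also need to handle updating artifacts that already exist (the corresponding Set- cmdlets) rather than only creating new ones.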

I also have a shared ResourceGroupTemplates folder.

That folder, for the purpose of this walkthrough, contains two files, of which one is significant.

  • FunctionAppTemplate.json, which contains a generic Azure Resource Group Template to deploy a Functions App. This is so that the template does not need to be defined each and every time an integration is developed; only the unique parameters for it are needed.
  • EXAMPLE FunctionApp.env.parameters.json, which contains an example of the unique parameters that each Function App needs to supply. If you look at the Visual Studio solution and the Functions App project, you can see that it holds two parameter files: one for test and one for prod.
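
To make the relationship between the template and the parameter files concrete: deploying the shared template with an integration-specific parameter file is just a plain resource group deployment. As a rough illustration (the resource group name and local paths are assumptions; the release definition later in this post does the same thing through the Azure Resource Group Deployment task), it could be done from PowerShell like this:

# Illustration only – resource group name and local paths are assumptions.
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'INT0001_ProcessPurchaseOrder-test' `
    -TemplateFile '.\Shared\ResourceGroupTemplates\FunctionAppTemplate.json' `
    -TemplateParameterFile '.\INT0001_ProcessPurchaseOrder\INT0001_ProcessPurchaseOrder_Functions\INT0001Functions.test.parameters.json'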

So, now that we have looked at what we are going to deploy, let’s look at the Visual Studio Team Services build and release definitions.

Visual Studio Team Services build definition

The main purpose of the build step and the build definition is to create a package for the release step to release to an environment. Not all of the project types that we are working with can be built by VSTS, for example the Functions project (which is also the reason why the XSLT is needed in the source-controlled project). Even though ideally it would be best if we could actually build them – the build itself being a validation that the code contained within the project has passed that quality check point – we do not need to build them to be able to deploy them.

The build definition contains 6 steps, well, seven actually.

  1. (Get the code from source control)
  2. Build the Resource Group Template solution – the Logic App
  3. Copy the Schemas and Maps artifacts
  4. Copy the Functions
  5. Copy the Shared Functions Template
  6. Copy the Scripts
  7. Finally, publish all of the prepared artifacts so that they are available to be used by the release definition.

And since I've called this continuous build and not only build, we also have a trigger for when this build should run. In this case it is set up to trigger when changes are made to the master branch (most often, in our process, by completing a pull request).

The way I see things, most of the steps are self-explanatory from their names, so I'll simply go through them with a textual representation of their significant configuration and no more explanation than that. If I do not show parts of the configuration, that's because no configuration is made to those sections and the defaults are in use.

Build solution

Project: **/INT0001_ProcessPurchaseOrder/*/*.deployproj

Clean: checked

Copy Schemas and Maps

Source Folder: $(Build.SourcesDirectory)/INT0001_ProcessPurchaseOrder/INT0001_ProcessPurchaseOrder_Artifacts

Contents: **/?(*.xsd|*.xslt|*.ps1)

Target Folder: $(Build.ArtifactStagingDirectory)/artifacts

Copy Functions

Source Folder: $(Build.SourcesDirectory)/INT0001_ProcessPurchaseOrder/INT0001_ProcessPurchaseOrder_Functions

Contents: **

Target Folder: $(Build.ArtifactStagingDirectory)/functions

Copy Shared Functions Template

Source Folder: Shared/ResourceGroupTemplates

Contents: FunctionAppTemplate.json

Target Folder: $(Build.ArtifactStagingDirectory)/functions

Copy Scripts

Source Folder: Shared/Powershell/scripts

Contents: *

Target Folder: $(Build.ArtifactStagingDirectory)/scripts

Publish Artifact

Path to Publish: $(Build.ArtifactStagingDirectory)

Artifact Name: output

Artifact Type: Server

That’s it for the build definition. Let’s now look at the release definition.

Visual Studio Team Services release definition

The purpose of the release definition is to take the package created by the build (as described above in the build definition step) and use the artifacts within it to deploy the build to an environment.

The release definition consists of 5 steps.

  1. Deploy Integration Account Schemas and Maps
  2. Deploy FunctionsApp template (aka create the “Functions application container”)
  3. Deploy Functions (aka use Web Deploy to deploy the Functions project I built)
  4. Deploy Logic Apps
  5. (Run a Powershell script to enable Azure diagnostics and ship them to my OMS workspace)

The last step isn't strictly needed. I added it to show off another small thing I consider a best practice.

Before we look at each step involved, let’s again look at the continuous aspect of it. In the release definition I have my trigger configuration set to enable Continuous Deployment, meaning that as soon as a new artifact version is available (as soon as a new build completes) a new release will be created.

As you can see I have defined two environments: Test and Prod. For Test I have deployment set up so that as soon as a new release is created, a new deployment to that environment is automatically triggered. For Prod it is configured as manual. For Test there is no approval needed, but for Prod I have also set up approval, so that when someone does request a deployment to be made to prod it must first be approved, or Bruce will get angry (which is a reference to something I said during the presentation if you haven't seen it, meaning that not everyone should be allowed access to production in this scenario).

I have it set up so that any project administrator can approve the deployment, but you can create your own groups or point to specific individuals directly as well.

Now for the tasks that will be triggered once a deployment is made to an environment (I will show the configuration only for test, but you can quite easily figure out what it would have looked like for prod – the steps are all the same).

Deploy Integration Account schemas and maps

(Azure Powershell)
Connection Type: Azure Resource Manager
Azure Subscription: (In my case INTEGRATE2017) This would be your Test or Prod (or whatever) subscription.
Script Type: Script File Path
Script Path: $(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/scripts/Deploy-AzureIntegrationAccount.ps1
Script Arguments: -rootPath '$(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/artifacts'

Deploy FunctionsApp template

(Azure Resource Group Deployment)
Subscription: (see comment on subscription in previous step)
Action: Create or update resource group
Resource Group: (in my case $(Release.DefinitionName)-test) If you deploy to test and prod in different subscriptions then you can just leave this as $(Release.DefinitionName), provided that’s what you want.
Template location: Linked artifact
Template: $(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/functions/FunctionAppTemplate.json
Template parameters: $(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/functions/INT0001Functions.test.parameters.json

Deploy Functions

App Service name: INT0001Functions-test (here, if you deploy test and prod to different subscriptions you must still have different suffixes since a globally unique name is required – this name must also be the same as the name given in the parameters file in the previous step)
Package or folder: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/functions
Publish using Web Deploy: Enabled

Deploy Logic Apps

(Azure Resource Group Deployment)
Action, Resource Group, Location, Template location as before.
Template: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/INT0001_ProcessPurchaseOrder/LogicApp.json
Template parameters: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/INT0001_ProcessPurchaseOrder/LogicApp.test.parameters.json

Enable Diagnostics

Even though this step is optional, let’s look at it anyway for completeness.

(Azure Powershell)
Script Type: Script File Path
Script Path: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/scripts/Enable-AzureRmDiagnosticsSettings.ps1
Script Arguments: -resourceName $(Release.DefinitionName)
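
For completeness, here is also a rough sketch of what a script like Enable-AzureRmDiagnosticsSettings.ps1 might look like. Again, this is not the actual script – the workspace details, the resource group naming (including the "-test" suffix) and the exact cmdlet usage are assumptions for illustration, based on the AzureRM modules.

# Illustrative sketch only – workspace name, resource group and the environment suffix are assumptions.
param(
    [Parameter(Mandatory = $true)][string]$resourceName
)

# Look up the OMS (Log Analytics) workspace to ship diagnostics to
$workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName 'oms-rg' -Name 'oms-workspace'

# Enable diagnostics on every Logic App in the integration's resource group
Get-AzureRmResource |
    Where-Object { $_.ResourceGroupName -eq "$resourceName-test" -and $_.ResourceType -eq 'Microsoft.Logic/workflows' } |
    ForEach-Object {
        Set-AzureRmDiagnosticSetting -ResourceId $_.ResourceId -WorkspaceId $workspace.ResourceId -Enabled $true
    }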

Wrap Up

That's it, I think. There are a lot of moving parts and I am sure some additional explanation could be required depending on your previous experience and knowledge. But the solution is all here: the links to the code and the explanation of the VSTS configuration. If you have any further questions please feel free to contact me.

As a follow-up, in case you have API Management in your solution and would like to use VSTS for that as well, have a look at the VSTS pipeline described by Mattias Lögdberg here.