Azure, Logic Apps, VSTS

Visual Studio Team Services Logic Apps continuous integration and deployment

..or “The code to my INTEGRATE2017 session”.

So, INTEGRATE2017 just ended (actually, by the time this post goes out it ended about a month ago). Fantastic event! If you’ve never been: in short, it’s an integration-focused event running over a couple of days in London, featuring Microsoft product group and community speakers. Regardless of whether you were there this year or not, I very much encourage you to be there next year (or catch the US version in October).

I had a session: “Logic Apps continuous integration and deployment using Visual Studio Team Services”. In it I showcased just what the title says: a process for developing Microsoft Azure Logic Apps solutions with Git source control hosted in VSTS, using the build and release management capabilities to package and deploy Logic Apps, Integration Account artifacts (schemas and maps), as well as Functions.

The video and slides are available on the INTEGRATE2017 site (or will be soon) and the slides are also available from me here.

What I’d also like to share is the sample code and scripts that I used, available here, as well as a step-by-step walkthrough of the actual Visual Studio Team Services configuration with screenshots and configuration strings.

The primary purpose of creating this process is the requirement to have a repeatable build, release and deployment pipeline that looks the same, works the same, and is configured the same way for all developers in a team that will, over time, build a lot (think in terms of 1500+) of integrations based on Logic Apps. One of the goals is also to limit the amount of work each developer has to do when building an individual integration by offloading some of that to the build and release steps, for example creating a resource group template for deploying the developed Azure Function, or adding the code to enable diagnostics to every Logic Apps resource group template. There are many more steps and actions we have taken along those same lines, and perhaps I will have more blog posts about them, but I will try not to lose the thread of this post by getting side-tracked with those. Much like I tried to simplify some things and keep my talk focused, I will do the same with this post.

As far as the actual build, release and deploy process goes, much can be improved and made even easier than it is. With a few simple tweaks to the build and release definitions (as long as you follow a naming convention), things can easily be changed to use more of the built-in variables, so that the definitions can be cloned from one integration to another without any changes. The name of the build or release definition is all that will change, and all that is needed. Not all integrations will include a Function, nor schemas and maps, and if so those steps can simply be removed.
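As an illustration of that kind of tweak (an assumption on my part, not how the definitions below are actually configured), if the build definition is named after the integration, the hard-coded project path used later could instead be expressed with a built-in variable:

Project: **/$(Build.DefinitionName)/*/*.deployproj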

Overview

So first off, the process. It is covered in more detail in the slides as well as the video, so go there for more coverage. Here I am just pasting an image of it as a reminder of what we are trying to achieve.

image

The Visual Studio solution

Also, before we dive into what the VSTS process looks like, let’s take a look at the Visual Studio solution and project structure, so that we know what we are, in the end, trying to deploy. Again, the code is available here.

image

To sum it up, we have:

  • A Resource Group Template project, containing the Logic App and the API Connection(s)
  • An Enterprise Integration Pack aka Integration Account project containing two schemas and a map, as well as the xslt that is the result of compiling the map.
  • An Azure Functions project

Besides what is contained within the Visual Studio solution for a specific integration, it should also be noted that we have a structure in source control that contains some of the other shared artifacts that we will need and use in the build and release definitions. Let’s briefly go through those as well.

First off, I have my scripts folder.

image

For the purpose of this walkthrough it contains two scripts:

  • Deploy-AzureIntegrationAccount.ps1, which is a helper script to deploy schemas and maps into an integration account from a folder.
  • Enable-AzureRmDiagnosticsSettings.ps1, which is a helper script to enable diagnostics logging, in this case to a Microsoft Operations Management Suite workspace.

I will not go into detail on either of these scripts here, just note what they are and what they do.
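To give an idea of what Deploy-AzureIntegrationAccount.ps1 does, here is a minimal sketch of how such a helper script could look, assuming the AzureRM.LogicApp cmdlets. Everything except the -rootPath parameter (the default resource group and integration account names, and the handling of existing artifacts) is an assumption on my part and not necessarily how the real script is written.

param(
    [string]$rootPath,                                   # folder containing the published schema/map artifacts
    [string]$resourceGroupName = "shared-integration",   # hypothetical defaults, adjust to your environment
    [string]$integrationAccountName = "myintegrationaccount"
)

# Upload every schema found under the root path
Get-ChildItem -Path $rootPath -Filter *.xsd -Recurse | ForEach-Object {
    New-AzureRmIntegrationAccountSchema -ResourceGroupName $resourceGroupName `
        -Name $integrationAccountName -SchemaName $_.BaseName `
        -SchemaFilePath $_.FullName -SchemaType Xml
}

# Upload every compiled map (xslt) found under the root path
Get-ChildItem -Path $rootPath -Filter *.xslt -Recurse | ForEach-Object {
    New-AzureRmIntegrationAccountMap -ResourceGroupName $resourceGroupName `
        -Name $integrationAccountName -MapName $_.BaseName `
        -MapFilePath $_.FullName -MapType Xslt
}

# Note: a real script would also need to handle updates to artifacts that already exist,
# for example by calling Set-AzureRmIntegrationAccountSchema/Map instead of New-.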

I also have a shared ResourceGroupTemplates folder.

image

For the purpose of this walkthrough it contains two files, of which one is significant.

  • FunctionAppTemplate.json, which contains a generic Azure Resource Group Template to deploy a Functions App. This is so that the template does not need to be defined each and every time an integration is developed; only the unique parameters for it are needed.
  • EXAMPLE FunctionApp.env.parameters.json, which contains an example of the unique parameters that each Function App needs to supply. If you look at the Functions App project in the Visual Studio solution you can see that it holds two parameter files: one for test and one for prod.

So, now that we have looked at what we are going to deploy, let’s look at the Visual Studio Team Services build and release definitions.

Visual Studio Team Services build definition

The main purpose of the build step and the build definition is to create a package for the release step to release to an environment. Not all of the project types that we are working with can be built by VSTS, for example the Functions project (which is also the reason why the xslt is needed in the source-controlled project). Even though it would ideally be best if we could actually build them – the build itself being a validation that the code contained within the project has passed that quality check point – we do not need to build them to be able to deploy them.

The build definition contains 6 steps, well, seven actually.

image

  1. (Get the code from source control)
  2. Build the Resource Group Template solution – the Logic App
  3. Copy the Schemas and Maps artifacts
  4. Copy the Functions
  5. Copy the Shared Functions Template
  6. Copy the Scripts
  7. Finally, publish all of the prepared artifacts so that they are available to be used by the release definition.

And since I’ve called this continuous build and not only build, we also have a trigger for when this build should run. In this case it is set up to trigger when changes are made to the master branch (most often, in our process, by completing a pull request).

image

The way I see things, most of the steps are self-explanatory from their names, so I’ll simply go through them with a screenshot and a textual representation of their significant configuration, with no more explanation than that. If I do not show parts of the configuration, that’s because no configuration was made to those sections and the defaults are in use.

Build solution

image

Project: **/INT0001_ProcessPurchaseOrder/*/*.deployproj

Clean: checked

Copy Schemas and Maps

image

Source Folder: $(Build.SourcesDirectory)/INT0001_ProcessPurchaseOrder/INT0001_ProcessPurchaseOrder_Artifacts

Contents: **/?(*.xsd|*.xslt|*.ps1)

Target Folder: $(Build.ArtifactStagingDirectory)/artifacts

Copy Functions

image

Source Folder: $(Build.SourcesDirectory)/INT0001_ProcessPurchaseOrder/INT0001_ProcessPurchaseOrder_Functions

Contents: **

Target Folder: $(Build.ArtifactStagingDirectory)/functions

Copy Shared Functions Template

image

Source Folder: Shared/ResourceGroupTemplates

Contents: FunctionAppTemplate.json

Target Folder: $(Build.ArtifactStagingDirectory)/functions

Copy Scripts

image

Source Folder: Shared/Powershell/scripts

Contents: *

Target Folder: $(Build.ArtifactStagingDirectory)/scripts

Publish Artifact

image

Path to Publish: $(Build.ArtifactStagingDirectory)

Artifact Name: output

Artifact Type: Server

That’s it for the build definition. Let’s now look at the release definition.

Visual Studio Team Services release definition

The purpose of the release definition is to take the package created by the build (as described above in the build definition step) and use the artifacts within it to deploy the build to an environment.

The release definition consists of 5 steps.

image

  1. Deploy Integration Account Schemas and Maps
  2. Deploy FunctionsApp template (aka create the “Functions application container”)
  3. Deploy Functions (aka use Web Deploy to deploy the Functions project I built)
  4. Deploy Logic Apps
  5. (Run a Powershell script to enable Azure diagnostics and ship them to my OMS workspace)

The last step isn’t needed. I added it to show off another small thing I consider a best practice.

Before we look at each step involved, let’s again look at the continuous aspect of it. In the release definition I have my trigger configuration set to enable Continuous Deployment, meaning that as soon as a new artifact version is available (as soon as a new build completes) a new release will be created.

image

As you can see I have defined two environments: Test and Prod. For Test, deployment is set up so that as soon as a new release is created a new deployment to that environment is automatically triggered. For Prod it is configured as manual. For Test no approval is needed, but for Prod I have also set up approval, so that when someone does request a deployment to be made to prod it must first be approved, or Bruce will get angry (a reference to something I said during the presentation if you haven’t seen it, meaning that not everyone should be allowed access to production in this scenario).

I have it set up so that any project administrator can approve the deployment, but you can create your own groups or point to specific individuals directly as well.

image

Now for the tasks that will be triggered once a deployment is made to an environment (I will give screenshots only for test, but you can quite easily figure out what it would have looked like for prod – the steps are all the same).

Deploy Integration Account schemas and maps

image

(Azure Powershell)
Connection Type: Azure Resource Manager
Azure Subscription: (In my case INTEGRATE2017) This would be your Test or Prod (or whatever) subscription.
Script Type: Script File Path
Script Path: $(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/scripts/Deploy-AzureIntegrationAccount.ps1
Script Arguments: -rootPath '$(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/artifacts'

Deploy FunctionsApp template

image

(Azure Resource Group Deployment)
Subscription: (see comment on subscription in previous step)
Action: Create or update resource group
Resource Group: (in my case $(Release.DefinitionName)-test) If you deploy to test and prod in different subscriptions then you can just leave this as $(Release.DefinitionName), provided that’s what you want.
Template location: Linked artifact
Template: $(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/functions/FunctionAppTemplate.json
Template parameters: $(System.DefaultWorkingDirectory)/$(Release.DefinitionName)/output/functions/INT0001Functions.test.parameters.json
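For reference only, the Azure Resource Group Deployment task in this step corresponds roughly to the following Azure PowerShell. The resource group name, location and local paths are illustrative assumptions, not values taken from the task:

# Make sure the target resource group exists, then deploy the shared Function App template
New-AzureRmResourceGroup -Name "INT0001_ProcessPurchaseOrder-test" -Location "West Europe" -Force
New-AzureRmResourceGroupDeployment -ResourceGroupName "INT0001_ProcessPurchaseOrder-test" `
    -TemplateFile ".\output\functions\FunctionAppTemplate.json" `
    -TemplateParameterFile ".\output\functions\INT0001Functions.test.parameters.json"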

Deploy Functions

image 

App Service name: INT0001Functions-test (here, even if you deploy test and prod to different subscriptions you must still have different suffixes, since a globally unique name is required – this name must also be the same as the name given in the parameters file in the previous step)
Package or folder: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/functions
Publish using Web Deploy: Enabled

Deploy Logic Apps

image

(Azure Resource Group Deployment)
Action, Resource Group, Location, Template location as before.
Template: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/INT0001_ProcessPurchaseOrder/LogicApp.json
Template parameters: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/INT0001_ProcessPurchaseOrder/LogicApp.test.parameters.json
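If you want to sanity check the Logic App template before it ever reaches the release pipeline, you can also validate it locally with Azure PowerShell; a hypothetical example against the test parameters (this is not part of the release definition itself):

# Validate the template and parameters against the target resource group without deploying anything
Test-AzureRmResourceGroupDeployment -ResourceGroupName "INT0001_ProcessPurchaseOrder-test" `
    -TemplateFile ".\INT0001_ProcessPurchaseOrder\LogicApp.json" `
    -TemplateParameterFile ".\INT0001_ProcessPurchaseOrder\LogicApp.test.parameters.json"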

Enable Diagnostics

Even though this step is optional, let’s look at it anyway for completeness.

image

(Azure Powershell)
Script Type: Script File Path
Script Path: $(System.DefaultWorkingDirectory)/INT0001_ProcessPurchaseOrder/output/scripts/Enable-AzureRmDiagnosticsSettings.ps1
Script Arguments: -resourceName $(Release.DefinitionName)
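As a rough illustration of what Enable-AzureRmDiagnosticsSettings.ps1 could do internally, here is a minimal sketch. The workspace lookup, the resource group suffix and the exact filtering are assumptions on my part (and depending on your AzureRM module version the resource query cmdlet may differ), so treat it as an outline rather than the actual script:

param([string]$resourceName)   # in the release this is passed as $(Release.DefinitionName)

# Hypothetical: look up the OMS (Log Analytics) workspace that diagnostics should be shipped to
$workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "oms-rg" -Name "my-oms-workspace"

# Enable diagnostics on every Logic App in the resource group named after the integration
Get-AzureRmResource -ResourceGroupName "$resourceName-test" -ResourceType "Microsoft.Logic/workflows" |
    ForEach-Object {
        Set-AzureRmDiagnosticSetting -ResourceId $_.ResourceId -WorkspaceId $workspace.ResourceId -Enabled $true
    }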

Wrap Up

That’s it. I think. There are a lot of moving parts, and I am sure some additional explanation could be required depending on your previous experience and knowledge. But the solution is all here: the links to the code and the explanation of the VSTS configuration. If you have any further questions please feel free to contact me.

As a follow-up, in case you have API Management in your solution and would like to use VSTS for that as well, have a look at the VSTS pipeline described by Mattias Lögdberg here.

Azure, BizTalk, News, Presentation

BizTalk Server futures–Presentations from TechEd North America

I have already relayed this information to so many, and given the links to even more, that I thought I’d put them up here for easy access. There is much and more to be written about the content, but I’ll settle for this. Information has been available around BizTalk Server 2010 R2 for some time, but it got much more real, and some things not previously mentioned or detailed were unveiled. In short:

Application Integration Futures: The Road Map and What’s Next on Windows Azure: Video Slides

Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azure: Video Slides

Azure, Installation, SQL, Windows

SQL Server VC++ Installation woes

I’ve installed SQL Server any number of times over any number of versions, but I have never had this problem before, and I am not sure why I got it now. However, since searching the web gave me very little in the way of a direct, working solution, I thought I’d write mine down. I was using Windows Server 2008 R2 Datacenter edition, SP1, patched to May 2012 standard – aka the Windows Server 2008 R2 image available in the Windows Azure Virtual Machine preview – and installing SQL Server 2008 R2 Developer edition onto it. Now, I don’t think they are related, but I am not ruling out that there is an issue in some way with that image, since I am not doing anything but starting the image and running the installation, which I had previously downloaded and un-packed from its ISO onto a separately attached data drive.

The error I am getting is this:

The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail. (Exception from HRESULT: 0x800736B1).

I tried several times, and often this error would occur during install, but on the fifth (or so) attempt the install was successful and everything looked to have installed fine – until I tried opening SQL Server Management Studio (SSMS) and got the exception there instead.

Now, following the instructions in the exception text I did two things. First – check the event log, where I found this:

Activation context generation failed for "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe". Error in manifest or policy file "C:\Windows\WinSxS\manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1.manifest" on line 0. Invalid Xml syntax.

Now looking through the event log I could see that I got this error for a number of other applications and services as well, and that ssms wasn’t alone in this.

Next, I ran sxstrace, i.e. (from an elevated command prompt):

sxstrace trace -logfile:trace.log

Then I tried to start SSMS to reproduce the error, which it did. So then I ran:

sxstrace parse -logfile:trace.log -outfile:trace.txt

(More on the sxstrace tool here).

The trace file, among other things, gave me this (similar) information:

INFO: Parsing Manifest File C:\Windows\WinSxS\manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1.manifest.
    INFO: Manifest Definition Identity is (null).
    ERROR: Line 0: XML Syntax error.
ERROR: Activation Context generation failed.

This file is from the Visual C++ 2005 Service Pack (SP) 1 Redistributable Package. So I proceeded to download and install both the original 2005 Redistributable (x86, x64) and SP1 (x86, x64), hoping that would fix the problem and correct the manifest file. Not so for me.

I still wanted to see if the error could be fixed by “normal” procedures so I ran System File Checker (SFC). It produced the following result:

sfc /scannow

Beginning system scan.  This process will take some time.

Beginning verification phase of system scan.
Verification 100% complete.
Windows Resource Protection found corrupt files but was unable to fix some of them.
Details are included in the CBS.Log windir\Logs\CBS\CBS.log. For example
C:\Windows\Logs\CBS\CBS.log

The log file contains this (snipped somewhat for readability):

Manifest hash for component [ml:280{140},l:152{76}]"x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1" does not match expected value.
Expected:{l:32 b:43e8b1d9f404eb67105ab15282fd01f5bf4cd30f7f0c5d1250d11e9384ae9cc5}
Found:{l:32 b:d47fec989a9ad0351d4effd5984343181925f15919245da2a0609e1c5d68f280}.
Unable to load manifest for component [ml:280{140},l:152{76}]"x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1"
[SR] Cannot verify component files for Microsoft.VC80.ATL, Version = 8.0.50727.4053, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope neutral, PublicKeyToken = {l:8 b:1fc8b3b9a1e18e3b}, Type = [l:10{5}]"win32", TypeName neutral, PublicKey neutral, manifest is damaged (TRUE)

At this point I gave up on any form of allowing installers or the system to fix the problem for me and went at the file myself using Advanced guidelines for diagnosing and fixing servicing corruption. The file is readable (although empty), but I cannot edit it (even if I am an administrator). Only SYSTEM has access to the file. So to be able to edit it I must first take ownership of it and grant ACLs:

>takeown /f C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.1833_none_d1c5318643596706.manifest

SUCCESS: The file (or folder): "C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.1833_none_d1c5318643596706.manifest" now owned by user "JEHBTS5\Administrator".

>icacls C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.1833_none_d1c5318643596706.manifest /grant administrators:F
processed file: C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.1833_none_d1c5318643596706.manifest
Successfully processed 1 files; Failed processing 0 files

>takeown /f C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1.manifest

SUCCESS: The file (or folder): "C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1.manifest" now owned by user "JEHBTS5\Administrator".

>icacls C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1.manifest /grant administrators:F
processed file: C:\Windows\winsxs\Manifests\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1.manifest
Successfully processed 1 files; Failed processing 0 files

Now I can edit the file. As for the content, I simply took it from another machine on which it existed and did not seem to have any issues. The content is this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- Copyright © 1981-2001 Microsoft Corporation -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
    <noInheritable/>
    <assemblyIdentity type="win32" name="Microsoft.VC80.ATL" version="8.0.50727.4053" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"/>
    <file name="ATL80.dll" hash="6d7ce37b5753aa3f8b6c2c8170011b000bbed2e9" hashalg="SHA1"/>
</assembly>

After saving the file I am (at least seemingly to this point) rid of the problems.

Azure, BizTalk, Presentation, Sommarkollo

Sommarkollo 2012–The Microsoft Integration Story

Ever updated, The Microsoft Integration Story, in an extended 3h format, joins the list as one of the available topics in Microsoft Sweden’s Summer Camp (Sommarkollo) 2012. Two stops in Stockholm (27/6, 21/8) and one in Helsingborg (26/6). I hope to see you there.

Enjoy
/Johan

clip_image001

Additional info (translated from the Swedish original):

Summer is almost here, and with it Microsoft’s popular event Sommarkollo. For the tenth year in a row we have lots of exciting seminars and products to present. Attend as many seminars as you like – completely free of charge! Take the opportunity to meet us when we visit Stockholm, Gothenburg and Helsingborg on our tour through Sweden this summer.

Sommarkollo is an event for those who want to be inspired and well prepared for the autumn. You will be brought up to speed on news, technology and other interesting and useful information concerning our latest and hottest products. The seminars are aimed at both Microsoft customers and partners.

Among much else we will present the new releases System Center 2012, Windows Server 2012 and SQL Server 2012. We will also give you a unique look at what Windows 8 will look like, and at how you and your company can work more simply and efficiently with the help of Microsoft’s productivity platform. We also offer several seminars tailored to specific audiences, such as developers, IT professionals and salespeople.
Pick and combine the seminars that interest and suit you. If you book both the morning and afternoon sessions we will treat you to a light lunch.

This is a great opportunity to discuss, get inspired and develop your skills in a relaxed setting.

Sign up here!

Azure, Learning

To learn or not to learn – it’s about delivering business value

For a developer, Windows Azure is an opportunity. But it is also an obstacle. It represents a new learning curve, much like the ones presented to us by the .NET Framework over the last couple of years (most notably with WF, WCF and WPF/Silverlight). The nice thing about it, though, is that it’s still .NET (if you prefer). There are new concepts – like “tables”, queues and blobs, web and worker roles, cloud databases and service buses – but it also re-uses those things we have been working with for numerous years, like .NET, SQL, WCF and REST (if you want to).

You might hear that Azure is something that you must learn. You might hear that you are a dinosaur if you don’t embrace the Windows Azure cloud computing paradigm and learn its APIs and architectural patterns.

Don’t take it literally. Read between the lines and be your own judge based on who you are and what role you hold or want to achieve. In the end it comes down to delivering business value – which often comes down to revenue or cost efficiency.

For the CxO, cloud should be something to consider. For the architect, Azure should be something to grasp and explore. For the lead dev, Azure should be something to spend time on. For the Joe Devs of the world, Azure is something you should be prepared for, because it might very well be there in your next project, and if it is – you are the one that knows it and excels.

As far as developers embracing Windows Azure goes, I see a lot of parallels with WCF when that launched. Investments were made in marketing it as the new way of developing (in that case primarily services, or interprocess communication). At one point developers were told that if they were not among the ones who understood it and did it, they were among the few left behind. Today I see some of the same movement around Azure, and in some cases the same kind of sentiment is brought forward.

I disagree. Instead my sentiment around this is: it depends. Not everyone needs to learn it today. But you will need to learn it eventually. After all, a few years later – who among us would choose asmx web services over WCF today? Things change, regardless of how you feel about it. Evolution is funny that way.

Because of the development and breadth of the .NET Framework, together with the diverse offerings surrounding it, a wide range of roles is needed. In my opinion the “One Architect” no longer exists. Much the same with the “One Developer”. Instead, roles exist for different areas, products and technologies – in and around .NET. Specialization has become the norm. I believe Azure adds to this.

I give myself the role of architect (within my field). Still, I would no sooner take on the task of architecting a Silverlight application than my first pick when onboarding a new member to our integration team would be someone who has been (solely) a Silverlight developer for the last couple of years.

How is Azure still different, though? Azure (cloud) will, given time, affect almost all of Microsoft’s (and others’) products and technologies (personal opinion, not quoting an official statement). It’s not just a new specialization – it will affect you regardless of your specialization.

You have to learn. You have to evolve. Why not start today?

Azure, cloud

Defining “cloud computing” – in my opinion

As I will begin doing more posts on and around cloud computing in general, and Windows Azure in particular, I’d first like to give my view on when something is cloud and when it is not.

Why? Well, it’s not the first time a word gets status. It gets hot. It gets overused, overloaded and obfuscated. Vendors, consumers, service providers and others might not always agree on what cloud is. It will get slapped on, belted down or fuzzily added to an existing product or service to make it more “today”. I might not agree. Others in turn may think I’m wrong. It will add to the overall confusion. So, to hopefully help provide clarity (but potentially adding to the confusion), when do I consider something to be “cloud”?

There are a couple of characteristics that I would look for when it comes to cloud.

Elastic

Or on-demand. I would assume that I could scale up and down, at any time. I would assume that the procurement process for another server or piece of service (accounts, users, databases etc.) is immediate (or next to it). Same when scaling down. I would expect to be able to manage this elasticity myself.

Elasticity is not “I have a cold stand by server I can bring online”. Elasticity is “I need 10, 20, or 100 new instances and I need them for two days”. The dynamic capacity does not have a defined limit.

Pay-per-use

I would expect the service to use some kind of pay-per-use charge model. How I use the service would be measured. How many hours have I been using it? How many GB of storage? How many MB transferred? How many connections opened? How many customers onboarded? How many users? – that sort of thing.

I would hope (and this one is on the fence) not to have to pay for the underlying software in the form of procurement or running licensing; it would be included in the service. However, this very much applies to Software-as-a-Service (SaaS) or Platform-as-a-Service (PaaS) rather than, say, Infrastructure-as-a-Service (IaaS). In the latter I would of course have to handle licenses myself.

Hardware agnostic

I would expect not to need to care about the underlying hardware. I wouldn’t need to know the cost of purchasing it, nor how it is set up or configured. I wouldn’t need to specify how my machine is built. Even if I do choose the size of the machine, it’s not really my machine, and I can change that at any time.

I would expect the environment, as far as the service or servers go, to be fault tolerant. If hardware fails, or some service needs to be performed, I would expect it to be transparent to me and not affect me or my service.

Summary

If someone calls their offering a cloud service and it does not fulfill these things, I would think twice before considering it a cloud offering. The service offered might be just what I want and need, but I wouldn’t consider it “cloud”. Windows Azure fulfills all of these.

This is not an exhaustive list of what I would expect, nor of the capabilities or limitations of Windows Azure or any other platform, though it is what I would say raises cloud above hosting.

Addendum

The term private cloud is often used by hosting providers that would like to compete (at least marketing-wise) with the bigger cloud offerings like Windows Azure. In my experience, localized as it might be, they fulfill some of the tenets I hold to, but they often fail on things like rapid (and self-serviced) procurement of a new resource (like a server) or on the metered pay-per-use model – where you are often expected to pay based on a pre-determined period of availability for that resource, like a month, if not a year.

Also, for something in the cloud to be usable for a business that will likely have parts of its operations, but not all of them, in the cloud, I would also look for the following.

Secure connectivity

Since the cloud is not on-premise, I would expect there to be a solution available for how I, in a secure manner, connect what I almost certainly still have on-premise with what I have off-premise – in the cloud. I would hope for this to be based on some kind of federated security model, and not on leasing a land line, using VPN, connecting to the existing Active Directory domain or setting up a trust. Though I’ll settle for a tried and proven solution.

Further posts

Continuing on with additional Azure posts, I’ll try to link back to these pillars where possible. With pure how-to technology posts that might not always be applicable, but keep these base concepts in mind anyway, so that you architect and build your solutions to support them.

Adapters, Azure, BizTalk, Mesh

Test run of the BizTalk Adapter for Live Mesh

The Live Mesh adapter is part of the recently released BizTalk Azure Adapters SDK 1.0 July CTP. It was first shown at TechEd 2009 North America by Danny Garber, but as someone who didn’t attend the conference, it passed me by. It didn’t surface for me until it made its way to CodePlex and Richard Seroter made a note of the fact a week ago or so. If you have TechEd Online access you can find the original video demoing it here.


How to use it



  1. Register for an account on Azure services development portal, if you don’t have one already. Specifically you want access to Live Services.

  2. Install the pre-requisites.


    • The WCF LOB Adapter SDK.

    • The Live Framework SDK. You don’t need the tools, but you need the SDK.

    • There might be other pre-requisites that I already had in place, so your experience might vary.

  3. Install the adapter. If you want to install it on a 64-bit system you can download the installer from codeplex. At the time of this writing a 32-bit (for VPCs etc) installer isn’t available, so you need to get the sources and build the installer. However see my install note 3 below, as you have to make a small change to the installer for it to work.

  4. Configure the adapter in BizTalk using the WCF-Custom adapter, or use my mesh.bindinginfo.xml as a template (don’t forget to change it to your own username and password). Some notes here:


    • You have to give it a URI of “mesh://?actions=LiveFX/OnReceiveFeed“; it’s hard coded in the adapter, and if it isn’t given that then it does nothing. To make it unique, and to be able to have more than one mesh receive location, you can add to it like you would a querystring. It will be accepted but not handled; only the value of actions is retrieved and used by the adapter. A smart thing here would be to move something into the URI that would make it naturally unique. When you configure other adapters it’s the name of a service, or a procedure, or something like that. In a scenario where you are listening to a mesh folder, for example, the name of that would be appropriate, or when listening to notifications from an application, the name of that; the name of the MeshObject is perhaps a common denominator?






    • image image image

    • At the moment it receives notifications about changes to all resources, and when it gets a notification it retrieves all feeds for that MeshObject and tries to read the userdata of the DataEntries associated with each DataFeed as a string; if it succeeds it initializes an XmlReader over that string and creates a message. Thus the userdata must conform to the rules of xml, and also to a couple of other things built into the adapter today: the name of the first node, and an attribute in that node, must match those in the configuration of the adapter, as must the title of the MeshObject. The attribute’s value must be equal to the name of the DataFeed that contains the entry for which we received a notification.
      The logic is a little unclear to me, and has lots of room for improvement, but admittedly – I’m no expert on the mesh resource model, nor do I know what the application it was built for sent across or how it stored it – and, it’s an early CTP. Sample (matching the config seen above, and produced by my sample app):
    • <myXmlElement myFeedNameXmlAttribute="MeshAdapterFeed">This is my data</myXmlElement>

  5. Build a cloud app to generate notifications with proper userdata content, or download my meshadaptertestconsole.zip (don’t forget to change it to your own username and password), which corresponds with my bindings above. It creates a MeshObject, a corresponding DataFeed and a DataEntry in that with userdata. In the Live Framework Resource Browser (an invaluable tool that comes with the SDK) it looks something like this:
     image

Conclusion


The first thing to note about the adapter is that at the moment it seems very tightly coupled to the demo. Judging from the “vision slide” at CodePlex, the idea is for it to eventually expand into other areas as well. It isn’t too hard to change or update it to, for example, read files out of a mesh folder. The Live Framework is pretty straightforward once you get used to it. But right now it’s just a glimpse into a future where BizTalk is the server product to bridge on-premises with the cloud – and doing so effortlessly and seamlessly with the use of the artifacts that we as BizTalk developers are used to.


Install notes


Install note 1:
If the adapter install complains about not being able to find Microsoft.ServiceModel.Channels, you haven’t got the WCF LOB Adapter SDK installed. The exception message (for completeness and for search engines) was: “Could not load file or assembly ‘Microsoft.ServiceModel.Channels, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ or one of its dependencies. The system cannot find the file specified.”
image


Install note 2:
If you get a message when running the installer that “Please wait while the installer finishes determining your disk space requirements” – try to run the installer from the commandline using the command msiexec /i <name_of_installer.msi>, or msiexec /package <name_of_installer.msi> /qr.


Install note 3:
I got the message that “Machine.config doesn’t contain system.servicemodel node”. First of all – yes it did; secondly – I could see that the adapter had added its bindings to the section – so it must be due to something else. When “binging” (do we call it that now? 😉) the exception message I found this post. It didn’t help much, but it did point me in the right direction, because it made me remember seeing a custom action in the solution. Sure enough, looking at the code for the custom action revealed the issue. At line 106, the installer is trying to apply the config for the 64-bit machine.config as well as the 32-bit one, not finding it since I’m on a 32-bit environment. Commenting out that line and rebuilding the installer does the trick. Note: I’m not trying to do a fancy works-for-all-scenarios solution. I just wanted to fix my specific problem.


Install note 4:
I can swear on the fact that just before beginning the process of installing the adapter I could create BizTalk projects. After the adapter (and WCF LOB Adapter SDK) was installed – I can’t. Project creation fails and I get the message “One or more templates do not match any installed project packages.”. I can’t really point a finger or lay the blame on any particular point, install, configuration or other entity. It did however stop working. Reinstalling BizTalk Server 2009 Developer Tools and SDK resolved the issue.