Adapters, BizTalk, BizTalk Server 2010, Pipeline Components

Threading issues in WCF-Adapter Body Path

(I originally wrote this on the 5th of May 2014 but did not publish it; I only sent it as internal feedback at the time. I am now choosing to publish it anyway, since there has been ample time for Microsoft to fix the problem.)

The scenario

I am using the BizTalk Server WCF adapters in a solicit-response send port and I am getting the response using the Body Path configuration option.

The problem

The main issues that the WCF Adapter's Messages tab – Inbound BizTalk message body – Path option, configured with an XPath and String Node Encoding, produces under load:

  • Deserialization fails, producing random and irregular errors seemingly caused by the stream position jumping to an erroneous location.
  • The adapter returns a mismatched response (!!!) back to the pipeline.

For the first issue, the following is the error message (or an example of one):

System.Xml.XmlException: Start element ‘To s:mustUndersta’ does not match end element ‘sendMessageResult’. Line 1, position 951.

If you catch it and look at the XmlException's call stack, you will find this:

at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)
at System.Xml.XmlUTF8TextReader.ReadEndElement()
at System.Xml.XmlUTF8TextReader.Read()
at System.Xml.XmlSubtreeReader.Read()
at Microsoft.BizTalk.Adapter.Wcf.Runtime.BinaryReaderStream.ReadContentAsString(Byte[] buffer, Int32 offset, Int32 count)
… (call stack continues with custom code)

For the second issue, you will NOT find anything wrong in BizTalk. It will simply associate a response with the wrong request and return the wrong message! In my opinion this is far worse than the first exception, since it effectively, successfully and quietly sends back someone else's response to a caller. On closer examination it does not appear to swap responses; it hands one response back to more than one caller, while the other responses simply disappear and are never used. BizTalk will not discover this (at least not in my case, where I always get the same response MessageType back from the port). Only application logic, whether built into BizTalk by you or in the system calling BizTalk, or human intervention, might discover what has happened, perhaps when getting the wrong data back.
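To make the first failure mode concrete: the call stack points at a stream being read while something else moves its position. Here is a minimal Python sketch of that class of bug. This is an illustration only, not the adapter's actual internals; the stream contents are made up.

```python
import io

# One stream holding two logical messages back to back.
stream = io.BytesIO(b"<r id='1'>first</r><r id='2'>second</r>")

# Reader A positions the stream at message 1...
stream.seek(0)

# ...but before A reads, reader B (another thread in the real
# scenario) repositions the very same stream at message 2.
stream.seek(19)

# Reader A now reads from B's position and gets the wrong bytes.
# With XML, this is exactly the kind of "start element does not
# match end element" confusion seen in the exception above.
wrong = stream.read(20)
print(wrong)  # b"<r id='2'>second</r>"
```

An unsynchronised shared position produces both garbled reads and whole responses delivered to the wrong consumer, which matches both observed symptoms.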

Guide to reproduce

Steps (that I took) to build up and test a solution to reproduce the error:

  1. Download the Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) Samples for .NET Framework 4, http://www.microsoft.com/en-us/download/details.aspx?id=21459 (this is not required, though it gives us a simple starting point).
  2. Use the Self-Host sample found at …\WCF\Basic\Services\Hosting\SelfHost\CS and described at http://msdn.microsoft.com/en-us/library/vstudio/ms733765%28v=vs.100%29.aspx. Basically this is the Calculator sample hosted in a Console app (this is not required for repro, it is just for simplicity).
  3. Test it to make sure it’s working.
  4. Extend the client somewhat to make it more simultaneous and multi-threaded. I will show this later, but essentially I am calling the Calculator Add method from several threads to create some load.
  5. Test it again and see there are no errors.
  6. Create a receive port and receive location, as well as a send port in BizTalk and route the message through so that BizTalk is between the client and the service.
  7. Test it and make sure it’s working (at this point I still have no issues).
  8. Now consume the Calculator service metadata in a BizTalk project to create the schemas for the Add, Subtract, etc. methods and their responses. Create a simple BizTalk flat file schema (we need something to store the string we extract with the XPath), create a pipeline with the Flat File Disassembler configured with that schema (we need to disassemble the string we receive from the adapter into that schema), and then a map from that schema to the original Calculator service schema's AddResponse message (we want the client to get the correct response back).
  9. Compile, deploy and wire everything up inside BizTalk (i.e. re-configure the ports created to use the receive pipeline we created, and the map we created).
  10. Also configure the send port's WCF Adapter – Messages tab – Inbound BizTalk message body – Path, with an XPath and String Node Encoding.
  11. Run a single message through to make sure everything works. In fact, run several messages through non-multi-threaded and make sure it works (which for me it did).
  12. Now run multiple threads to create a load on the system.
  13. Watch as the previously described errors appear. At first it works fine, then I begin getting responses I did not expect (2+2 = 8?), and then I run into the "Start element '…' does not match end element '…'" exception.

An in-depth look

The service

There is really nothing to see here. It's your most basic WCF service, the same as it comes with the sample (slightly abbreviated below).

Code

namespace Microsoft.ServiceModel.Samples
{
    // Define a service contract.
    [ServiceContract(Namespace="http://Microsoft.ServiceModel.Samples")]
    public interface ICalculator
    {
        [OperationContract]
        double Add(double n1, double n2);
       ...
    }

    // Service class which implements the service contract.
    // Added code to write output to the console window
    public class CalculatorService : ICalculator
    {
        public double Add(double n1, double n2)
        {
            double result = n1 + n2;
            Console.WriteLine("Received Add({0},{1})", n1, n2);
            Console.WriteLine("Return: {0}", result);
            return result;
        }
        ...

        // Host the service within this EXE console application.
        public static void Main()
        {
            // Create a ServiceHost for the CalculatorService type.
            using (ServiceHost serviceHost = new ServiceHost(typeof(CalculatorService)))
            {
                // Open the ServiceHost to create listeners and start listening for messages.
                serviceHost.Open();

                // The service can now be accessed.
                Console.WriteLine("The service is ready.");
                Console.WriteLine("Press <ENTER> to terminate service.");
                Console.WriteLine();
                Console.ReadLine();

            }
        }

    }

}

Config

This is basically the same as the sample; the only exception is that I removed security (simply because that matched the scenario I was testing).

    <bindings>
      <wsHttpBinding>
        <binding name="wsConfig">
          <security mode="None" />
        </binding>
      </wsHttpBinding>
    </bindings>
    
    <services>
      <service name="Microsoft.ServiceModel.Samples.CalculatorService" behaviorConfiguration="CalculatorServiceBehavior">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8000/ServiceModelSamples/service"/>
          </baseAddresses>
        </host>
        <!-- this endpoint is exposed at the base address provided by host: http://localhost:8000/ServiceModelSamples/service  -->
        <endpoint address="" binding="wsHttpBinding" contract="Microsoft.ServiceModel.Samples.ICalculator" bindingConfiguration="wsConfig"/>
        <!-- the mex endpoint is exposed at http://localhost:8000/ServiceModelSamples/service/mex -->
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
      </service>
    </services>

    <!--For debugging purposes set the includeExceptionDetailInFaults attribute to true-->
    <behaviors>
      <serviceBehaviors>
        <behavior name="CalculatorServiceBehavior">
          <serviceMetadata httpGetEnabled="True"/>
          <serviceDebug includeExceptionDetailInFaults="False"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>

The client

Here I have added some multi-threading aspects on top of the sample code.

Code

namespace Microsoft.ServiceModel.Samples
{
    //The service contract is defined in generatedClient.cs, generated from the service by the svcutil tool.

    //Client implementation code.
    class Client
    {
        static void Main()
        {
            Console.WriteLine("Press ENTER to start!");
            Console.ReadLine();

            List<Task> tList = new List<Task>();

            Action a = () =>
            {
                Random random = new Random();
                int mseconds = random.Next(1, 100) * 10;
                System.Threading.Thread.Sleep(mseconds);

                // Create a client
                CalculatorClient client = new CalculatorClient();

                for (int i = 0; i < 10; i++)
                {
                    double result = client.Add(Convert.ToDouble(i), Convert.ToDouble(i));
                    double localResult = Convert.ToDouble(i) + Convert.ToDouble(i);
                    if (result != localResult)
                        Console.WriteLine("{0} !!!!!!!!!!!!!!!!!!! {1}", result, localResult);
                }

                //Closing the client gracefully closes the connection and cleans up resources
                client.Close();
            };

            for (int i = 0; i < 100; i++)
            {
                Task t = new Task(a);
                t.Start();
                tList.Add(t);
            }

            Task.WaitAll(tList.ToArray());

            Console.WriteLine();
            Console.WriteLine("Press <ENTER> to terminate client.");
            Console.ReadLine();
        }
    }
}

As you can see it does not take much; I am not running thousands of threads here, in fact in total I only make a thousand calls, and I have gotten an error every time I have run the app so far.

Let’s examine this a little bit closer. As you can see I am doing the following.

double result = client.Add(Convert.ToDouble(i), Convert.ToDouble(i));
double localResult = Convert.ToDouble(i) + Convert.ToDouble(i);
if (result != localResult)
    Console.WriteLine("{0} !!!!!!!!!!!!!!!!!!! {1}", result, localResult);

That is, I am sending the two numbers to the service for addition, and then doing the same calculation locally. Then I compare the result I get from the service with my local result. If they do not match, I print that fact.

Config

The client's config (by necessity) matches that of the service. It is available in the download, but I am not including a dump of it here.

The BizTalk solution

To be able to reproduce the problem in a portable manner I created a simple BizTalk solution.

Visual Studio Solution

At this point I have got three projects in Visual Studio, of which my BizTalk project is the first in the below screenshot.

image

It contains the following artifacts:

  • CalculatorService_microsoft_servicemodel_samples.xsd – the schema for the Calculator service that was automatically generated for me (I removed the other generated artifacts as they were non-essential for the purpose of this repro).
  • FFAddResult.xsd – a simple flat file schema that will act as the temporary carrier of the result after I extract it from the incoming xml response until the Map transforms the message back into an AddResponse message.
  • Map1.btm – a map that transforms from FFAddResult to CalculatorService_microsoft_servicemodel_samples#AddResponse.
  • ReceiveFFAddResultPipeline.btp – a disassemble pipeline that will take the incoming string and create an FFAddResult xml message.

The FFAddResult.xsd is extremely simple.

image

As is the Map1.btm.

image

BizTalk Server configuration

In BizTalk I have a receive port and a receive location with a very simple configuration: PassThruReceive and PassThruTransmit pipelines, and a WCF-Custom WS-Http binding (so I can host it in BizTalk and have fewer moving pieces – not essential to the repro, just easier to move around).

image

Everything is per the default configuration, except no security.

image

No behaviors, no special parts of the message extracted here, or none of that (the XPath is on the send port receive side).

Speaking of which, the send port (in the configuration that gives me issues) is a solicit-response port running the WCF-WSHttp adapter (to be frank, I have not tested other adapters/bindings) and the receive pipeline we created.

image

The receive pipeline has a trivial config.

image

We have our inbound map configured.

image

And a filter on the receive port name that I won’t include a screenshot of.

The adapter configuration is set to forward the call to the backend service add method.

image

It also has security set to none (not shown in the screenshot), and on the Messages tab the Inbound BizTalk message body – Path is set to an XPath and String Node Encoding. The XPath in this case is /*[local-name()='AddResponse' and namespace-uri()='http://Microsoft.ServiceModel.Samples']/*[local-name()='AddResult' and namespace-uri()='http://Microsoft.ServiceModel.Samples'], which forwards the string value inside the AddResult node to the pipeline.
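To see what that Body Path extraction amounts to, here is a small Python sketch. It is illustrative only; the element names follow the service namespace above, and the exact response shape is assumed from the consumed metadata.

```python
import xml.etree.ElementTree as ET

# A response body of the shape the Calculator service returns.
body = (
    '<AddResponse xmlns="http://Microsoft.ServiceModel.Samples">'
    '<AddResult>4</AddResult>'
    '</AddResponse>'
)

ns = {'s': 'http://Microsoft.ServiceModel.Samples'}
root = ET.fromstring(body)

# The equivalent of the configured XPath: select AddResult and take
# its string value, which is all the pipeline receives downstream.
result = root.find('s:AddResult', ns).text
print(result)  # prints 4
```

The pipeline then never sees the SOAP envelope or the AddResponse wrapper, only the bare string value, which is why the flat file disassembler is needed to turn it back into a message.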

Executing the failing code

Below is a screenshot of the app running, where, as you can see, the response is not as expected.

image

As mentioned before, the app gets a number of these mismatched, erroneous responses and then crashes (due to the previously mentioned exception and the sample app's lack of exception handling).

Here is also some sample output from the service (in this case the screenshot is from a test that was able to run to completion, and what we are seeing are the final tasks/threads doing their last calls).

image

The workaround

Ok, now that we are aware of the failing component, can we work around it?

Enter the XPathExtractor pipeline component, based on some of BizTalk's hidden gems.

Essentially, the solution is: don't use the Inbound BizTalk message body – Path option. Instead, get the full body and extract the XPath you want in a pipeline component. I have not done any performance comparisons, but I am not expecting the code to have any noticeable difference in performance. You might want to think twice before running very large messages through it (as will become evident in the source code included later), but I think the same is true for the adapter's Body Path option, which (as we can see from the exception's call stack) also gets the content of the node as a string.

I created a new port and a new pipeline, getting only the body in the adapter and instead extracting the string in the pipeline. The configuration is as follows.

image

The Execute method of the pipeline component does this:

// Requires references to Microsoft.BizTalk.Streaming.dll (ReadOnlySeekableStream,
// VirtualStream) and Microsoft.BizTalk.XPathReader.dll (XPathCollection, XPathReader).

// Wrap the original stream so it can be seeked.
Stream stream = new ReadOnlySeekableStream(pInMsg.BodyPart.GetOriginalDataStream());

XmlTextReader xmlTextReader = new XmlTextReader(stream);
XPathCollection xPathCollection = new XPathCollection();
xPathCollection.Add(this.XPath);
XPathReader xPathReader = new XPathReader(xmlTextReader, xPathCollection);

bool matchFound = false;
while (xPathReader.ReadUntilMatch())
{
    if (xPathReader.Match(0))
    {
        // Read the matched node's string value and make it the new message body.
        string val = xPathReader.ReadString();
        stream = new VirtualStream(new MemoryStream(System.Text.Encoding.GetEncoding(this.Encoding).GetBytes(val)));
        matchFound = true;
        //break; // don't break; read to end to play nice with other components
                 // that have a streaming approach and might want the full message
    }
}

if (!matchFound)
    throw new Exception("xPathReader.ReadUntilMatch() found no match");

stream.Seek(0, SeekOrigin.Begin);
pInMsg.BodyPart.Data = stream;
pContext.ResourceTracker.AddResource(xPathReader);

I have run this a number of times and have so far never gotten the same exception.

Remember, if you have any objections to the code in this component (like why I am adding the xPathReader to the ResourceTracker, or whatever else): this is not the code in use when I get the issues, it is the code used to get around them.
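For comparison, the same streaming idea as the pipeline component, sketched in Python with a pull parser: match the wanted element, keep its string value, but still read the document to the end. Names and shapes here are illustrative, not the BizTalk API.

```python
import io
import xml.etree.ElementTree as ET

NS = '{http://Microsoft.ServiceModel.Samples}'

def extract_value(stream, local_name):
    """Stream through the XML; return the text of the first matching
    element, but keep reading to the end (as the pipeline component
    does) instead of bailing out at the first match."""
    match = None
    for _, elem in ET.iterparse(stream, events=('end',)):
        if elem.tag == NS + local_name and match is None:
            match = elem.text
    if match is None:
        raise ValueError('no match for ' + local_name)
    return match

body = io.BytesIO(
    b'<AddResponse xmlns="http://Microsoft.ServiceModel.Samples">'
    b'<AddResult>8</AddResult></AddResponse>'
)
value = extract_value(body, 'AddResult')
print(value)  # prints 8
```

Note the memory trade-off is the same as in the C# component: the matched value is buffered in full, so very large extracted values cost proportional memory.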

Quick performance comparison

Quick and dirty indeed. This is not intended to go deep or be thorough. With that out of the way…

Without Adapter Body Path

Red is processor usage. Blue is the message publishing rate. Green is memory (I am not sure I got the right counter for that one, but I'll ignore that for the purpose of putting together this post).

image

With Adapter Body Path

image

It's not really usable as a performance measure because it fails quite early. The most interesting thing about this graph, I would say, is that I don't have to reach more than 4 simultaneous requests to start getting issues.

With XPath extraction in pipeline component

image

Also note that due to the randomness of my Task execution, these runs may not be altogether comparable, so keep that in mind. I post them mainly to illustrate that I am not running a really massive load, to show how early it fails, and to note that the workaround is not terrible.

My environment

My repro environment is a Hyper-V virtual machine with 4608 MB allocated, running alone on an 8-core host machine. It's BizTalk Server 2010 with CU6, Windows Server 2008 R2 fully patched, and SQL Server 2008 R2 SP1 (yes, I am aware there is a later service pack, but at the time it was not applied; I would be very surprised if it made a difference). I have reproduced this error in several environments, so it's not isolated to mine. Although I am running BizTalk Server 2010, I would not be surprised to find this in BizTalk Server 2009, BizTalk Server 2006 R2 or even BizTalk Server 2013, though I can neither confirm nor deny since I have not tried.

Download

The full source code and bindings for everything I have mentioned in this article, including the code for the pipeline component, is available for download here.

Uncategorized

My old joheb.spaces.live.com blog

These are posts from my old blog on joheb.spaces.live.com (at the time it wasn't called live.com anyway…), which is marked as private and which I no longer have access to. Please ignore links and such; chances are good they aren't working, and I basically just copied and pasted. Most of the content is me (us) reporting back from TechEd Amsterdam 2005.

2005-12-15

Internet Explorer Developer Toolbar

Ever wanted to see your table layout? Did you do it by adding border=1? Ever wondered about the size of an image? Ever wanted to measure a part of a page in pixels? Ever wanted to browse the DOM of a page through a treeview-like structure? For these features and much more, check out the Internet Explorer Developer Toolbar.

2005-07-26

Need help in VS.NET? Google it!

Some of us either can't afford or don't have access to MSDN, and some of us simply don't want it even though we could have it, for some strange reason (*hrm*Halvan*hrm*). I sometimes even find myself going to Google as opposed to the MSDN help. I found, by accident, this macro that's just perfect for those times. It adds a simple macro to Visual Studio that lets you take the text you have highlighted and search for it on Google. It's a very simple macro, and you could easily extend it should you wish; it just goes to show what useful tricks the IDE can do. Get it here (oh, and don't miss the link to the updated version at the bottom).

Get moblogging

Going on vacation? Doing anything interesting that you would like to share? Start moblogging. Moblogging is short for mobile phone blogging, the ability to create a blog post, with an optional picture, directly from your e-mail enabled cell phone. Check out this article.

2005-07-21

AJAX

AJAX (Asynchronous JavaScript And XML) seems to be a hot topic right now. It's really not a new technology, but as browser support has increased and Microsoft has declared its intention to include additional support for it in ASP.NET 2.0 with the Atlas initiative, interest has grown.

DevX featured three articles about it in its last newsletter, 1 2 3 (might require registration).

There is a groktalk available here.

And there is a recent article on MSDN here.

I'm sure there are more sources of information out there; this is not an attempt to list the best resources, it is simply an example of recent activity around the subject.

Groktalks

Do you grok it?

As good as it can be to listen to a webcast like the ISA Server Technical Overview, it can be a bit tedious; it's simply so looooooong. 1:38 in total is, for me, a long time to sit with headphones on listening to a webcast. It also takes a lot of time away from other things, regardless of whether you do it in your spare time or as part of your job. Enter Groktalks.

Groktalks are short talks of roughly 10 minutes presented by Microsoft Regional Directors. You can't really present the same amount of content in 10 minutes as you can in 2 hours, but that's not the point. The point is: when can you not spare ten minutes? The format, for me, is perfect. Taking ten minutes out of a day to listen to some interesting topic is possible almost every day, and it's a great way to learn something new about topics of interest. So go check out groktalks.

Webcast: ISA Server 2004 Technical Overview

TechNet Webcast: ISA Server 2004 Technical Overview (Level 200) 1h 38min.

Presented by Keith Combs

This is a good introductory webcast. Since it is an overview of the entire product, most areas are covered, and for details there are later webcasts that cover specific areas in more depth. There is a whole series of webcasts about ISA Server here.

The agenda of topics covered looks like this:

  • Improvements over ISA Server 2000
  • Exploring the interfaces, wizards etc. (which are very WYSIWYG, drag-and-drop like)
  • Filtering and firewall policies
  • Publishing web and mail servers
  • Enabling and configuring VPN
  • Viewing and configuring monitoring and alerts

The short product description

  • ISA Server is a firewall
    * Protects resources
    * Screens traffic
  • ISA Server is a proxy
    * Acts as a proxy for web, email, etc. servers
    * Caching

Common deployment scenarios

  • Edge Firewall
    * Caching, authentication, VPN
    * Integrated security solution
  • Secure Publishing
    * Exchange, Sharepoint and IIS
  • Branch Office
    * Site to site secure connectivity
    * Remote site security
  • Remote Access
    * Flexible and powerful policy
    * Quarantine

Resources

If you want to get more hands on experience without actually installing the product check out the virtual labs.

A good (unofficial, non-MS) resource site is isaserver.org.

2005-07-19

Webcast: Being more productive with the .NET Framework

[DEV325] Being more productive with the .NET Framework

Presented by Juval Löwy, IDesign

This webcast represents one of the seminars that I didn't attend while at TechEd; there was something more interesting on at the same time. I still found the topic interesting enough to catch the webcast. This seminar is a combination of tips on how to do things in the tool itself and how to do some things programmatically. Some are really useful tips, others are more cool but rarely used features (like the opacity demo). This is the abstract.

WinCV – available on your box with Visual Studio 2003. Displays header-file-like type information; that is, it shows the class definition, not the implementation. You can customize the assembly list it searches through via its config file. The tool can be useful for getting to know the assemblies of (especially large) projects. In VS.NET 2005 it's built into the tool itself: whenever you do Go To Definition and the location isn't available as a source file, you get a WinCV-like view.

WinDiff: if you need to compare files, which you often do, get it from the Visual Studio 6 disc. It is far better than VSS's difference tool, and there is no newer tool available.

Window sliding in VS.NET 2003: Tools – Options… – Environment – General – Animate environment tools – Speed, or you can turn it off. Guaranteed to save you some time. I changed this one immediately; I mean, how often haven't you waited for a window to close?

Multiple startup projects, nothing new. You can have multiple startup projects in a solution, so that when you press F5 both projects start. Useful, for example, in a client/server remoting scenario or the like.

One file for many projects: instead of Open, use Link File. I hadn't even noticed this option before; cool. Normally when you add an item to a project you go to Add – Add New Item, select the file you want and click Open. This makes a copy of that file in the solution folder, should it be located elsewhere on disk. In the open dialog, however, there is a small arrow on the right-hand side of the Open button. If you click it you get some more options, one of them being Link File, allowing you to use the file from its current location. Having used a lot of Visual SourceSafe, we most often did this by means of a shared file in VSS: when you checked your file in, it altered both locations, and when you did a Get Latest the next time, you got the new version in both locations. There are still uses for that, I'm sure, but this is another and sometimes better way to do it (certainly when you are not working with VSS).

To create a directory for the solution without placing the .sln file in a project folder, expand More in the New Project dialog.

It’s very easy, and underused, to add custom tools to the Visual Studio menu.

The Treat warnings as errors checkbox, together with warning level 4 (all the warnings the compiler can give you). The speaker felt that you should always have this checked (always treat warnings as errors), the reasoning being that you should never ship anything for which the compiler has given a warning. I know for a fact that this isn't always practical; you certainly don't always want to do it, because the compiler can sometimes warn on very useful scenarios. But nonetheless, I agree with the speaker in most cases.

Oh, and in this seminar someone finally said what I had been waiting a long time to hear, though I already knew it: the drag-and-drop database connectivity features of Visual Studio for Windows Forms are not for use. "Anyone caught using them should be fired." You should always separate database connectivity, business logic and user interface. However, there is a trick that allows you to use this drag-and-drop feature in a class file: if you temporarily let your class inherit from Component, you can drag and drop items onto it and they will be set up correctly. You can then remove Component and it will still work. I don't see this as being very useful in most cases, but there may be situations where you want rapid development while still adhering to some best practices, where it might come in handy.

Rectangular selection, using the Alt key. No news, but a useful reminder.

Poor man's configuration editor: View – Other Windows – Document Outline. Useful for config files and even for XML files; makes them easy to navigate.

Similar to how you wrap public variables in properties, you can wrap public event variables in event accessors, for the same reasons you wrap variables. It is coded like below:

EventHandler _MyEvent;
public event EventHandler MyEvent
{
    add
    {
        _MyEvent += value;
    }
    remove
    {
        _MyEvent -= value;
    }
}

Breakpoint filters are an additional feature besides the conditional breakpoint functionality in VS.NET 2005. They can be very useful (mostly in threading scenarios), but they are off by default.

Coding standards: light on the "why", heavy on the "how". There is an industry-standard coding standard at IDesign.net. The speaker is of the opinion, as am I, that you should always have a coding standard. He suggests that a 20-25 page document is sufficient. The document shouldn't explain why you do something, simply how you do it; first you follow the rules, then you can begin to learn why.

The session also went into Threading and Factoring (Design). For info on those parts, watch the session.

2005-07-18

TechEd Michelin report

So, with its four crossed utensils (whatever they are), how did TechEd 2005 measure on the star scale once experienced?

First, a recap. 1 star equals something like: if you are in town anyway, you might as well eat there. 2 stars mean that if you are passing by, or even if you are not, it might be worth a small detour to eat there. 3 stars mean a place where eating there can be the purpose of the trip itself; it's that good.

Now in TechEd terms this would mean something like this.

1 star: if you live in Amsterdam (or nearby) and won't have to pay for the trip or admission, it would be worth your time to attend.

2 stars: if you live nearby, or have other business in the area, it would be worth the money to attend TechEd since you're going to be around anyway.

3 stars: you consider it worth the trip, your time, and the (outrageous, considering Microsoft is advertising itself and helping itself to more sales) admission fee.

Now, given that scale, I'd have to give it a 3. Granted, there are things that could make it better, all of which I mentioned in my post-conference evaluation (give me that Xbox!), but if I had to choose between attending TechEd and a couple of weeks of "ordinary" training, I'd choose TechEd every time. And I definitely want to return next year if I can.

There are, however, a few things I would do differently next year. Don't cram that many sessions into the schedule; take the time to do more hands-on labs. Gather more questions from the organisation that you can ask the experts first-hand while there. Don't be so over-ambitious with the reporting unless absolutely necessary, since doing reports like the ones we did takes a lot of time. However, I would still try to spend all waking hours that the conference is open at the conference and take in as much as I can. Looking back, it has been an extremely packed week, but that's the way I like my weeks.

TechEd Ask the Experts

There was a part of the exhibition hall where Microsoft had teams manning small stands representing products, like SharePoint and CMS, or SQL Server. I had three questions in particular for the experts.

The first question concerned SharePoint and the ISAPI filter we have built for one of our customers. It is a custom authentication solution; in short, it checks whether the user should be logged in as a guest user or should get the default login dialog box, in effect enabling guest access to a non-anonymous SharePoint site. On the site of the customer for whom it was developed, it works perfectly in all areas. On a default installation, it works until it comes to displaying or retrieving a list of type document or image library, or items from one. Then, for some reason, when running as the ISAPI-supplied guest user, it will still display the dialog box.

The official answer however was one we had heard before, sadly.

– Use ISA server. What you are doing is not supported.

Unofficially, the answer was that they had no idea what might be causing it and had never run into that specific problem before (but since it was unsupported they hadn't tried, so…). The recommendation was that, if this was important to us, we could try opening a support case to see if Microsoft would be willing to examine the problem more closely. Status quo, as before.

The second question concerned CMS and the use of an external authentication authority: when using an external source for validating the user, how would you use that in combination with CMS?

The answer to this was also nothing new: identify the user with the external source but bring back some kind of identification for the user, like a role, and then have a user account with which you can authenticate users in that role.

The problem with this solution arises when there isn't a 1-1 relationship between a user and the roles he has. If a user has two roles that aren't just building upon each other, i.e. editor of some areas and reader of others while still not being able to access additional areas, this can lead to complexity and many roles. Suffice it to say that CMS wasn't built with this in mind.

The third question was about SQL Server and lock escalation: will SQL Server escalate locks at some point even though a locking hint like WITH (ROWLOCK) has been specified? The answer is yes. When locks use enough memory (there is no absolute number of locks that triggers it), SQL Server will escalate the lock. It will also need to escalate if you are updating columns in a query for which there is no good index. This KB article discusses lock escalation.
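A minimal sketch of the behaviour discussed (the table and column names are hypothetical): the hint only requests row locks, it does not guarantee them.

```sql
-- Request row-level locking; under memory pressure SQL Server
-- may still escalate this to a table lock despite the hint.
BEGIN TRANSACTION;

UPDATE dbo.Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderId = 42;   -- a good index on OrderId keeps the lock footprint small

COMMIT TRANSACTION;
```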

Review of Ask the Experts

Overall, the availability of the people manning the Ask the Experts booths was good, as were their willingness to help and their knowledge of their subject areas. I didn't really get any of the answers I wanted, however, but that is down to flaws in the products themselves, or in our design, not in the people giving the answers.

12:19 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

TechEd Day 5 Summary

Friday.

If nothing else is noted we both attended a session; when we went separate ways, and we did, a character (J) or (M) denotes who took that particular session.

  • Windows Forms: An In-Depth Look at Windows Forms in Visual Studio 2005 (J)
  • What’s New in Web Service Enhancements (WSE) 3.0 (M)
  • Together at Last: Combining XML and Relational Data in SQL Server 2005
  • SQL Server 2005 Security for Database Developers (J)
  • Advanced Orchestration Design Using BizTalk Server (M)
  • Client and Middle Tier Data Caching in SQL Server 2005
  • MSF v4: What’s New and Old in Microsoft Solutions Framework v4 (J)

Since this was the final day it ended a bit earlier than previous days. Although the week had been good, it felt nice to be on our way home again. However, KLM was going to stop that. More on that later.

09:59 | Add a comment | Permalink | Show trackbacks (0)

MSF 4.0

[DEV250] What’s new and old in MSF 4.0
Presented by Rafal Lukawiecki, Project Botticelli

A lot is still the same as in MSF version 3.0 (e.g. exam 70-300 is based upon 3.0).

History

Version 1 came in 1991-1993. Version 2 came in 1998, with the help of the field. In 2003 a thorough rework was made, resulting in version 3. MOF is related, but it handles a different part of the software lifecycle; they do, however, overlap. MSF is for development and MOF for maintenance/operations. MSF is not created by Microsoft alone; there is an MSF Partner Council. There are two flavors: MSF v4 Agile (currently in beta) and MSF v4 Formal (CMMI) (also in beta).

So why do projects fail, and why do we need frameworks?

Figures from the year 2000 said that of all projects, 23% fail totally, an additional 49% are considered failures, and only 28% are considered successful. To make a car comparison: 1 in every 4 cars immediately kills the driver, 2 out of 4 cars leave the driver so injured he can hardly get out of the car, and only 1 out of 4 cars takes the driver where he wants to go, the way he wants to get there.

There are some common root causes of failures. The number one reason is failure to work as a team. “Primadonna programmers” for example can cause big problems for a project. Lack of flexibility in processes is also a major contributor.

For projects that fail, the averages are that they cost 45% more than budgeted, take 65% more time than allocated, and deliver 67% of the functionality desired.

Does it (frameworks) work (help)?

Yes, but only if you use the relevant bits of the framework for your project. A project can use all of it, but that is not often needed; a project would likely need a minimum of 4-6 people for there to be any point in using it all. Version 4 works even for smaller projects. For single-person projects there are some aspects that apply to the mindset of a good single developer, but it's when you get involved in teamwork that it's really useful.

What is a framework?
Formally: A set of conceptual tools and best practices.

Traditionally MSF is that, but 4.0 is more of a methodology.

Agility
Is the ability to cope with change. The price to pay is that the process is a bit unpredictable (as to how long it will take to finish).

CMMI
Capability Maturity Model Integration. Handles the predictability of an organisation in terms of its ability to produce quality software.

MSF v3 seems more formal than v4 Agile, but less formal than MSF v4 CMMI.

Extreme programming (XP) has similarities with MSF, but XP is less predictable. MSF v4 Agile is more agile while still more predictable and controllable than XP. MSF v4 CMMI is the most predictable, but naturally less agile than XP.

Key MSF Concepts

v3 Concepts
-Disciplines
  -Project Management
  -Risk Management
  -Readiness Management
-Process Model
  -Envisioning
  -Planning
  -Developing
  -Stabilizing
  -Deploying
-Team Model
  -Program Management
  -Product Management
  -User Experience
  -Development
  -Test
  -Release Management
-Design
  -Conceptual, Logical, Physical design. UML was a big part.

v4 Concepts
-Project Management
  -One of the most powerful features of VSTS (Visual Studio Team System) is its automation of project management.
-Process Model
  -MSF v4 removes the process model; instead there are now “Governance Checkpoints” along “Tracks”.
    -The steps involved are, however, still basically the same.
  -Daily builds are the heartbeat of the development process.
-Team Model (expanded)
  -Program Management
  -Product Management
  -User Experience
  -Architecture
  -Development
  -Test
  -Release / Operations
-Design
  -UML is considered a little old-fashioned for today's software. UML 2 might be better but, in the meanwhile, Microsoft has released Domain Specific Languages (DSLs).

The Team Model is scalable.

Future
Since MSF is now in the tool (Visual Studio 2005), it has a bright future.
The future of MSF for infrastructure is, however, a bit uncertain.

There is a forum available to discuss MSF v4.

09:31 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

Client and middle tier data caching

[DAT421] Client and Middle Tier Data Caching with SQL Server 2005

Presented by Michael Rys, Microsoft.

This session was mostly a no-news presentation, but it served to firmly reinforce my belief in some best practices. Michael Rys told us how caching improves performance because in-memory lookups are generally faster than queries. Caching also moves load off the database towards the middle (and client) tiers, where scaling out is easier.

Key aspects include what to cache and in what type of object to cache it. The DataSet is a great option for caching relational data, and in .NET 2.0 it has been improved with regard to lookup speed when indexing, searching and retrieving data. You could also decide to use a custom object.

There is also the decision of expiration: how long is an item in the cache valid? The new notification feature in SQL Server 2005, which signals when the data that was cached changes, can help invalidate the cache. In .NET 2.0, query notifications are implemented through the SqlDependency object.
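On the database side, query notifications depend on Service Broker; a minimal sketch of the setup (the database name is hypothetical):

```sql
-- SqlDependency-style query notifications are delivered via Service Broker,
-- which must be enabled in the database being watched.
ALTER DATABASE CacheDemo SET ENABLE_BROKER;
```

As an aside from my own reading, the watched query itself also has restrictions, such as using two-part table names and an explicit column list rather than SELECT *.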

The client caching was presented with the idea of the (disconnected) smart client. When caching locally you can place your data in, for example, a file, SQL Server Mobile, or SQL Server Express (depending on the scenario and device), but of course also the full version of SQL Server. We then examined the synchronization features available, including replication, RDA (Remote Data Access) and web services.

08:32 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

2005-07-17

SQL Server 2005 Security for Database Developers

[DAT360] SQL Server 2005 Security for Database Developers

Presented by Kimberly Tripp and Rafal Luckawiecki

Just securing your SQL Server 2005 database is not enough, as attacks at the application level are, unfortunately, on the rise. SQL Server application developers need to follow best practices to avoid creating vulnerabilities that risk data theft. In this session we will look at object ownership chains, the impact of user/schema separation, the benefits and potential pitfalls of “execute as” and best practices in authentication. We will also tackle the issues of cryptography-based security from the application’s perspective, as it is easy to misunderstand and misuse these techniques. For example, you may be using a good algorithm, but are still generating your keys using certain weak password schemes.

Unlike many of the other official session summaries, this (somewhat shortened) summary is actually very accurate as to the content covered. The session's presenters worked together very dynamically to make the session easy to smile at while still taking in the information given.

As far as security goes, defense in depth is important, meaning that you should not limit the work done to secure your application to the database; instead, secure all layers in your system, hardware and software alike, to achieve defense in depth. This session, however, covered the database part.

The first thing one should notice about the changes in the security model in SQL Server 2005 is the introduction of a middle layer, the schema. Those of you who have worked with other database systems might recognize this. The schema sits between the user and the objects (tables, functions and procedures), in the sense that an object belongs to a schema. A user is then granted permissions on the schema, as opposed to on objects directly.
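A minimal sketch of the idea (all names are hypothetical): the grant is made on the schema, and it covers the objects in it.

```sql
CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
CREATE TABLE Sales.Orders (OrderId int PRIMARY KEY, Amount money);
GO
-- Permission is granted on the schema, not the table;
-- it covers every object the schema contains.
GRANT SELECT ON SCHEMA::Sales TO SomeUser;
```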

A dynamically built string executed with EXEC always runs under the credentials of the caller; this can be changed using the new EXECUTE AS clause. There are four options: EXECUTE AS CALLER (which is the default), EXECUTE AS 'UserName', EXECUTE AS OWNER, and EXECUTE AS SELF.
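As a sketch (the procedure and table names are hypothetical), EXECUTE AS OWNER lets dynamic SQL inside a procedure run under the module owner's credentials instead of the caller's:

```sql
CREATE PROCEDURE dbo.GetOrderCount
WITH EXECUTE AS OWNER   -- the default would be EXECUTE AS CALLER
AS
BEGIN
    -- Dynamic SQL normally runs as the caller; here it runs as the owner.
    EXEC ('SELECT COUNT(*) FROM dbo.Orders');
END
```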

Different levels of encryption using different key lengths were discussed, and all but RSA with 2048-bit keys and AES/Rijndael were advised against. The use of DES, 3DES, DESX, RC2 and RC4 was discouraged due to insufficient security, but of course, it all depends on the type of data you are trying to secure. The syntax to specify encryption at the various levels was examined.
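A minimal sketch of that syntax using one of the recommended algorithms (the key name and passphrase are hypothetical):

```sql
-- A symmetric key using AES with a 256-bit key, protected by a passphrase.
CREATE SYMMETRIC KEY OrderKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'A sufficiently long and random passphrase';

OPEN SYMMETRIC KEY OrderKey
    DECRYPTION BY PASSWORD = 'A sufficiently long and random passphrase';

SELECT EncryptByKey(Key_GUID('OrderKey'), N'sensitive value');

CLOSE SYMMETRIC KEY OrderKey;
```

Note how this ties back to the session's warning: a good algorithm does not help much if the passphrase protecting the key is weak.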

Encryption (security) does incur a performance penalty.

15:14 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

Combining XML and Relational Data in SQL Server 2005

[DAT384] Together at last: Combining XML and Relational Data in SQL Server 2005

Presented by Michael Rys, Microsoft

This session briefly presented XML and went on to look at its structure, the scenarios for using it, how it fits into SQL Server (both 2000 and now 2005), and how you leverage the new XML-related features in SQL Server 2005.

One of the reasons why XML support was put into SQL Server 2005 is to provide you with the possibility to use SQL Server as your single repository for data. Instead of placing XML files on the file system you can now place them in SQL Server instead.

Data can be structured using XML in most cases, but for some cases like flat structured data, a relational format is still the better choice.

In SQL Server 2005 the XML support has been upgraded and new features have been added. There is now an xml datatype you can use for columns, variables and parameters. It can represent XML fragments as well as complete XML documents, and it can be constrained using a schema. It is queryable with XQuery, although only a subset of the features of XQuery is implemented, since XQuery has not yet achieved standard status and Microsoft wants to reduce exposure to any changes. It is updateable with XML DML. It can be indexed. It ensures that the content is well-formed and validated. Internally the XML is saved in a binary UTF-16 encoded format, as a LOB, which can be as large as 2 GB.
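A minimal sketch of the datatype and XQuery in use (the table and XML structure are hypothetical):

```sql
-- An xml column; it could also be constrained by a schema collection.
CREATE TABLE dbo.Invoices (
    InvoiceId int IDENTITY PRIMARY KEY,
    Body      xml NOT NULL
);

INSERT dbo.Invoices (Body)
VALUES (N'<invoice id="1"><line sku="A1" qty="2"/></invoice>');

-- XQuery over the column: query() returns a fragment, value() a typed scalar.
SELECT Body.query('/invoice/line'),
       Body.value('(/invoice/@id)[1]', 'int')
FROM dbo.Invoices;
```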

The session continued to look at the details about the new datatype, the handling of XML and use of XQuery and improvements to FOR XML.

There are whitepapers and other resources available to read to learn more about XML and XQuery in SQL Server 2005.

11:44 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

2005-07-15

Windows Forms in Visual Studio 2005

[WCD324] Windows Forms: An In-Depth Look at Windows Forms in Visual Studio 2005

Presented by Brian Noyes, IDesign

This session presented news in Windows Forms development; the content overlapped somewhat with the news in the VS.NET IDE session.

Databinding

The VS.NET 2003 databinding features are still there, but there is also a lot of new stuff. As for the many cool things that can be done with databinding, some are not really applicable to n-tier enterprise development; others are, though. The most interesting thing with databinding that I could see was the object databinding design features.

Placing Controls

There are now so-called snaplines to simplify aligning controls on the design surface. There are also two types of layout to select from, TableLayout or FlowLayout, like in a web project.

DataGridView

The DataGridView control is an evolution of the DataGrid control that contains everything that developers found themselves extending the DataGrid with, every time, and more. A BindingSource is an object that handles Synchronization and Notification between the source and the display control.

Office look and feel

There are many new controls added to the toolbox in Visual Studio 2005. Many of them bring the Office look and feel to Windows Forms development, for example the ToolStrip control. You can of course also apply custom rendering to menus and toolbars in a very easy way.

Developer efficiency

This was a bit of a repeat from previous sessions. Smarttags, refactoring, inline-watches etc.

Asynchronous work

There are improvements to handling asynchronous work in the form of the BackgroundWorker class, which takes care of all the details about when you can and cannot change GUI controls from another thread, invoking them, etc. You don't have to know or care about multithreading issues to do asynchronous work in Windows Forms when using this class. Juval Löwy (also IDesign), who presented a previously attended session, has made an implementation of BackgroundWorker for .NET 1.1, presented in an article here.

Settings Editor

Again, somewhat of a repeat. The settings editor is now strongly typed. There are both Application and User settings available for editing. The User settings will be saved to a location below c:\Documents and Settings\Username\Application Data\Xxx, similar to Isolated Storage (it might even be using Isolated Storage under the covers for all I know).

11:57 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

2005-07-14

TechEd Day 4 Summary

Thursday.

  • The Fallacies of Enterprise Development
  • Event-Driven Architecture
  • 1-Hour Service-Oriented Connected Systems Bootcamp
  • Understanding Transaction Isolation in SQL Server 2000 and SQL Server 2005
  • What’s new in the Visual Studio 2005 IDE

This was the only day I skipped a session. We skipped the second keynote session to have some time to look more closely at the exhibition, check out some resources, and have time to "ask the experts" about some issues.

The evening ended with the Microsoft TechEd 2005 Europe party, featuring a U2 cover band and a group called the Scissor Sisters. There were also food and drink, and some games, like pool. Keep an eye out for the pictures; I'll be posting them shortly.

08:18 | Add a comment | Permalink | Show trackbacks (0)

2005-07-13

What’s new in the Visual C# 2005 IDE

[DEV341] What’s new in the Visual C# 2005 IDE
Presented by Juval Löwy, IDesign

There is a lot that is new in the Visual Studio environment that has nothing to do with the new version of the framework, the CLR or the compilation of code; these are simply improvements to the tools. This session was about those things. I will only mention the coolest and most useful features.

One of the first things people are likely to notice is the change tracking functionality, represented by the yellow and green lines to the left of the code window. They mark lines changed during the editing session: yellow for changes not yet saved, green for changes that have been saved.

There are lots of new rules for how you want your code formatted. Formatting now also works on code that you copy/paste, and can be applied when you end a line of code by typing the ';'. You can even have code automatically formatted just the way you like it when you open a source file from wherever, written by whomever, that doesn't structure the code the way you want it. One immediate question that popped into our heads as we listened to this was: will this cause trouble in a version control system if individual developers alter the file to their liking? The way you handle this is by using a team settings file where everyone uses the same settings. To take a quote from a previous session (Richard Taylor from Mindsharp): "If you don't, you will suffer" (said there about SharePoint do's and don'ts, but just as applicable here).

There are extended code analysis features: you can receive warnings when you break, for example, naming rules. The feature is highly customizable and extensible.

The resource editor is improved from the previous version. Now, in my opinion this isn't really as much an improvement as it is a "get it right" issue, because the previous version… well, it wasn't all that good.

There are "inline watches", and these are very cool. When you are in debug mode and hover over a variable (of dataset/datatable/string type), you can use the mouse to access a small tooltip-style menu that appears and browse the object's properties, etc. This, too, is extensible, so that you can build your own debug visualizer.

The docking of windows has been simplified. Dragging and dropping windows has always been very flexible, but it was sometimes far from easy to get the window to go where you wanted to. Of course, once you have it where you want it you very seldom move it again, so this feature might not be on my most used list.

Edit and continue. Many of you (especially those of you who are VB programmers) might cheer at this. They told us a story, at one of the other seminars I think, about how it came to be. As it turns out, the process for which features get added is determined by a vote: everyone in the development team has a few (5) points that they can place however they wish on different features, which then get developed in that priority order. The C# team's voting had things like generics at the top, with edit and continue scoring 0 (zero), while the VB team had it at the top. Since the tool is for both VB and C#, it was put into the tool.

Refactoring. There are quite a few options in the refactoring engine, some of which I will more than likely have great use for. It's not massive, though, and is referred to as "a medium-weight refactoring engine". The cool thing about the refactoring engine is that it isn't using search-and-replace or other text manipulation schemes to do its work; no, it's using the compiler.

Settings. There is a settings editor allowing read/write access to your config file.

16:58 | Add a comment | Permalink | Show trackbacks (0)

2005-07-12

Understanding Transaction Isolation Levels

[DAT301] Understanding Transaction Isolation in SQL Server 2000 and SQL Server 2005

Presented by Kimberly Tripp

The ACID properties of transactions: Atomicity, Consistency, Isolation, Durability. The isolation concept states that a transaction sees data either in the state it was in before another concurrent transaction modified it, or the way it is after that transaction has completed; it does not see an in-between state.

Isolation Levels
Level 0 – Read Uncommitted
Level 1 – Read Committed
Level 2 – Repeatable Reads
Level 3 – Serializable
The default isolation level in BOTH 2000 and 2005 is ANSI/ISO Level 1, Read Committed. The way it's implemented means that, by default, we use locking.

Read Uncommited

Dirty reads: a transaction can read another transaction's uncommitted changes (which might be rolled back). DML statements always use exclusive locking. Row locks are not taken when reading (SCH-S locks are used), and locks on the data being read are not honored.

Read Commited

Inconsistent reads. Only committed changes are read. DML statements always use exclusive locking. Locks are released as rows are read, so reads may be inconsistent and not repeatable (produce another result) when read again later in the transaction.

Repeatable reads

Data read is accessible to other transactions, but only for reading (not DML). Any row that you have read will thus produce the same result if read again later. Phantoms: rows that were not present at the beginning of the transaction can appear if the same query is executed again later.

Serializable

Data read is accessible to other transactions, but only for reading (not DML). The data read is protected so that no data can enter the "set" during the transaction. If the same query is executed later in the transaction it will yield the same result. To protect the set, index locks are taken. If no proper indexes are available to lock the set, lock escalation to higher levels (i.e. the table) might occur.

SQL Server 2005

SQL Server 2005 introduces a row-level versioning mechanism that can be combined with the default isolation level of Read Committed, called snapshot. There are two levels at which snapshot can be implemented: the statement level (Read Committed using statement-level snapshot isolation, RCSI) or the transaction level (Snapshot Isolation). Both make use of tempdb.

Read Committed Snapshot Isolation (RCSI) – READ_COMMITTED_SNAPSHOT

No phenomena are possible within a single statement; however, a multi-statement transaction may still produce different results.

Snapshot Isolation – ALLOW_SNAPSHOT_ISOLATION

This allows a user to ask for snapshot isolation; it's not on by default. Data read will be the same for the duration of the transaction: the data will be the version it was when the transaction started.
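A minimal sketch of the two options (the database and table names are hypothetical):

```sql
-- Both versioning options are off by default and are set per database.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;   -- RCSI, statement level
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- transaction level

-- With ALLOW_SNAPSHOT_ISOLATION on, a session still has to opt in:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM dbo.Orders;  -- sees the version as of the start of the transaction
COMMIT TRANSACTION;
```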

Overriding database settings

This can be done with a WITH hint at a per-table, per-statement level, or with SET at the session level.
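A minimal sketch of both override mechanisms (the table names are hypothetical):

```sql
-- Session level: applies to all following statements on this connection.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Per-table, per-statement: hints override the session level for one table each.
SELECT o.OrderId, c.Name
FROM dbo.Orders o WITH (READUNCOMMITTED)
JOIN dbo.Customers c WITH (HOLDLOCK)
  ON c.CustomerId = o.CustomerId;
```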

22:18 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

1-hour Service-Oriented System Bootcamp

[ARC316] 1-hour Service-Oriented System Bootcamp

This was actually a 1 hour compressed overview of the pre-conference track called Architecture Boot Camp for Building Connected Systems. Presented by the same people that did that track, Arvindra Sehmi and Beat Schwegler, Microsoft.

The most important thing is to get the thinking right. If you don't, then everything that comes after will be less good.

Why do we do SOA? To align business and technology, something that is very difficult to achieve: the agile business. To get to the new way of doing SOA, we in IT must be more contract oriented and have an outward, business-driven view as opposed to looking inward towards the technology.

The 4 tenets (teachings) of Service Orientation

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is determined based on policy

The 5 parts that practically any company consists of

1. Develop Products/Services

2. Generate Demand

3. Deliver Products/Services

4. Plan & Manage Enterprise

5. Collaborate

Conceptualizing the business: introducing the Motion methodology, a methodology to architect, design and develop service-oriented, connected systems.

The 5 pillars of Connected Systems

1. Identity and Access

2. Data

3. Interaction

4. Messaging

5. Workflow

The 3 part model (which is mostly about Messaging)

See image.

11:54 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

2005-07-11

Event-Driven Architectures

[ARC311] Event-driven architectures
Presented by Gregor Hohpe, Architect at ThoughtWorks

Most applications are based on the basic tenets of the call stack: one method calls another one, and resumes execution when the called method completes. However, an increasing number of technologies such as Web services, Indigo, etc., use alternative, event-driven processing models. These models result in very flexible, composable systems but can be challenging to manage. This talk examines the underlying principles behind event-driven architectures, and explains common patterns and best practices for non-call stack-based architectures.

Event-driven architecture is not a new concept, it’s just another way to describe a WebService architecture.

Each node in an Event-Driven Architecture (EDA) does not interact with its environment in any other way than by reacting to the events that come in.

Synchronous (call stack) <-> asynchronous (pipeline) architectures.

Coupling
When you connect two systems together, there is coupling. There are a lot of facets to coupling, though, and it's not all bad. Loose coupling is better if you want to enable a large amount of individual changeability, but loosely coupled systems are more difficult to build and more difficult to debug. You need to find a balance suitable for your requirements.

Composability
The ability to build new things from existing pieces; being able to recompose the system at runtime. This requires interoperability, reduced assumptions between components, location decoupling, configuration and validation. A very highly composable system might, nine times out of ten, be less useful. Don't underestimate how appealing it can be from an architecture perspective to build like this, but think about the purpose of the system as a whole. Benefits are reuse and testing (specifically easy and effective unit testing with good code coverage).

Key Mechanisms

-Inversion of control and dependency injection (they are synonyms). Components still talk to each other directly.
-Channels. Components do not talk directly to each other; they talk to a channel. This makes for simplified interaction. There are, however, naming issues with channels and the messages sent over those channels, most of which are related to message routing.

Tracking

Variability with traceability: trackers, to get to know, and visualize, the current system architecture of a composable system.

22:34 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

The Fallacies of Enterprise Development

[ARC312] The Fallacies of Enterprise Development

Ted Neward presented the fallacies, that is, the mistaken assumptions made in enterprise development. The session content can be summarized with this quote and list:

“Essentially everyone, when they first build an enterprise application, makes the following 10 assumptions. All turn out to be false in the long run and all cause big trouble and painful learning experiences.”

1) The network is reliable
2) Latency is zero
3) Bandwidth is infinite
4) The network is secure
5) Topology doesn’t change
6) There is one administrator
7) Transport cost is zero
8) The network is homogeneous
9) The system is monolithic
10) The system is finished

Some snippets that clung to me while listening:

  • “Something that only happens once in a million times will happen next Tuesday” – meaning that in a high-volume OLTP system transactions occur so frequently that something that happens only once in a million times will in fact occur very soon. Plan for it.
  • “At any given point you should be able to walk up to the database and flip the switch, and the system should take appropriate actions to continue running; if you cannot do that, then you are assuming reliability.”
  • Bandwidth is not infinite. The smart client application, for example, is a network enhancement: by having all presentation logic on the client side, only the needed data is sent over the network.
  • Security: “A castle with infinitely high walls around 90% of the property isn't really effective.”
  • Build administrator-friendly systems. Build diagnostics and administrative features into the system from the start.

21:56 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

What’s new in the Biztalk 2006 runtime

[CTS304] What’s new in the Biztalk 2006 runtime
or Biztalk Internals

Presented by Jeffrey Wierer, Microsoft

To put it simply, what does BizTalk do? It takes data from one place, performs some type of action or transformation on the data, and outputs it to another place.

The start of the session was spent describing how BizTalk does messaging. The 2004 and 2006 messaging concepts are the same. BizTalk 2004 was a revolution from 2000 in that it introduced the .NET Framework; 2006 is an evolution.

Key design points in Biztalk 2006

  • Deeper integration into, and adherence to, the Windows Server System.
  • Virtual Server 2005, 64-bit, .NET 2.0 and SQL Server 2005 will be supported.
  • Simplified setup, update and deployment.

Improvements

  • Pipelines
    -Large message parsing
    -Large message mapping
    -Pipeline API accessibility via ODX
    -Recoverable interchange processing, i.e. if one part of a batch fails, the others can still succeed
    -2004 only had standard interchange processing, i.e. if one part failed the others would also fail even if they in themselves were correct
  • Failed message routing
    -Allowed in 2006; BizTalk can subscribe to errors and take special action.
  • Message resume
    -Message send resume was available in 2004, but not receive message resume. New in 2006 are also bulk actions, i.e. bulk resume or bulk terminate.
  • New adapters (out-of-the-box):
    *MSMQ adapter
    *MQSeries adapter
    *POP3 adapter
    *SharePoint adapter (community releases already exist for 2004 at GotDotNet)
  • Enhanced
    *SMTP adapter
    *Usability
    *More granular performance counters
  • Adapter in-order-delivery
  • Updated Adapters & Developer tools
    *Flat-File Import Wizard

SQL-Server-like relative field position delimiter functionality; no need to count positions. It works the same way as when you import a flat file into SQL Server: you get to visually determine where the fields start and end. This brought applause from the audience. I (Johan) haven't worked with BizTalk enough to know what they were really clapping about, but I can imagine that having to count characters to know where to delimit the columns gets old fairly quickly.

    *VS.NET 2005 support only, no support for 2003.
    *Built on .NET 2.0 framework
    – Existing applications need to be upgraded to run in 2005, upgrade tools are available.
    *Improvements in Orchestration
    – Zoom in/ Zoom out
    – Collapse and expand shapes
    – WebServices support
    – From a development process and administration aspect, it is now easier to start orchestrations; you can now just start them all, without knowing what the dependencies etc. are. There is also no longer any need to refresh in the configuration manager / enterprise manager as you had to in 2004. When you do something, the display is updated.

The session also featured a demo using some of the new adapters, namely POP3 and SMTP.

21:42 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

Man arrested for using another's WLAN

It's a good thing we are not living in the US. Apparently there is a law there that can get you arrested for using an open WLAN. Check out this article. It's healthy to keep that in mind if you ever visit. I agree with the person commenting at the end of it though: "Don't the police have anything better to do!?"

14:48 | News and politics

TechEd Day 3 Summary

If nothing else is noted we both attended a specific session; where we went separate ways, and we did, a character (J) or (M) will denote who took the specific session.

  • Building Smart Client Applications with .NET: The future of Software Development (J)
  • SQL Server 2005 Table and Index Partitioning: Improving Scalability and Manageability for Large Databases (M)
  • Visual Studio Tools for Office: Building Office Solutions Using Visual Studio Tools for Office 2005 (J)
  • SQLCLR vs. T-SQL: Best Practices for Development in the Database (M)
  • Developing Site Definitions and Templates for Windows SharePoint Services (J)
  • Implications of “Indigo” and Service Orientation for Architects and Architecture (M)
  • WS-I_M_REALLY_CONFUSED
  • Index Creation Best Practices in SQL Server 2005
  • Best Practices for Content Management Server (CMS) 2002 Development (J)
  • The Day “Indigo” Met the “Tiger” (M)
  • What’s New in BizTalk Server 2006 Runtime

The evening ended with the Swedish Attendees Party at the Café Bar American at Leidsplein, where food and drink were served courtesy of Microsoft Sweden. There was time to talk to customers as well as Microsoft representatives. They had a buffet dinner and drinks, on the house, and passed out caps. The Microsoft Sweden caps were way better looking than the official TechEd cap; not that I wear a cap all that often anyway, but still.

13:05

MCMS Development Best Practices

[PRT335] Best Practices for Content Management Server (2002) Development
Presented by Arpan Shah, Microsoft

The presentation began with Arpan telling us that CMS has a bright future. It's a system built on .NET, it takes full advantage of ASP.NET and it has a strong user community surrounding it. There are a lot of user samples and tools to improve management and development productivity, on gotdotnet.com for example. He also introduced some of the people supporting the community, such as Stefan Gossner, Mark Harrison, etc.

Do's and Don'ts

  • ASP.NET's best practices apply to CMS.
  • Don't touch the database
  • Don't have more than 200-300 containers in a container (container = channel/posting etc) (use CMS Health Check to identify those places)
  • Minimize the use of templates
  • Don't have hundreds of placeholders in a template (keep it below 30)
  • Don't search by custom properties
  • Don't have more than a dozen or so top-level channels

Workarounds

  • Security API – you can authenticate against other sources and then use CmsAuthenticateAsUser; this still requires an AD/local user to log in as, though.
  • Custom property search – Put your custom properties in a custom table

OutputCache

  • Add the OutputCache directive to pages and user controls
  • Inherit from CmsHttpApplication
  • Add the VaryByCustom attribute to the OutputCache directive
    -For personalization the value for this is CMSRole or CMSUser
  • Calling AddValidationCallbackAllCmsContent will invalidate all cache if anything (anywhere in the site) has changed
  • If you want to customize caching further
    -Cache by a custom attribute
    -Override CmsHttpApplication.GetVaryByCustomStringToken in global.asax

If you have anonymous or read-only sites, consider turning off features that have to do with authentication and publishing.
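
As a sketch of the mechanism: a VaryByCustom override boils down to mapping a token name plus the current request's identity to a cache-key fragment, so the output cache keeps one copy of the page per distinct fragment. The following is a hypothetical, framework-free sketch (not the MCMS API; the `Resolve` method and its null handling are made up for illustration, only the CMSUser/CMSRole token names come from the session):

```csharp
using System;

// Hypothetical sketch (not the MCMS API): the kind of mapping a
// VaryByCustom override performs. Given a token from the OutputCache
// directive and the request's identity, return a cache-key fragment;
// requests that map to the same fragment share one cached copy.
static class VaryByCustomSketch
{
    public static string Resolve(string token, string userName, string role)
    {
        switch (token)
        {
            case "CMSUser": // one cached copy per authenticated user
                return userName ?? "anonymous";
            case "CMSRole": // one cached copy per role
                return role ?? "guest";
            default:        // unknown token: let the framework handle it
                return null;
        }
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(VaryByCustomSketch.Resolve("CMSUser", "alice", null));
        Console.WriteLine(VaryByCustomSketch.Resolve("CMSRole", null, null));
    }
}
```

In a real application this logic would live in the GetVaryByCustomString override (or, per the session, in CmsHttpApplication.GetVaryByCustomStringToken); two requests that resolve to the same fragment share one cached rendering.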

Placeholders

If you are developing a custom placeholder the following three, in order of precedence, should be your options: 1) check gotdotnet for an existing implementation of what you want to do, 2) inherit from one of the existing placeholders and extend its functionality, 3) inherit from BasePlaceholder.

Authoring

For richer web editing, check out the Telerik controls. There are also a lot of free controls available to enable better authoring of custom properties, give a different look and feel, or provide some of the Site Manager functionality in the console. There is also an entire whitepaper dedicated to integration with Word.

Future

CMS 2002 SP2 will be available in the VS/SQL 2005 timeframe (week of November 7th). It will feature VS/SQL 2005/.NET 2.0 support; however, there is no intent to support WebParts. They might work, and they might not, but it will be unsupported.

As a best practice, and to prepare for the next version, write modular code. Encapsulate reusable code, use base templates. Separate business and presentation. None of these are really CMS specific.

SP2 will contain a Site Navigation Provider, emphasizing that the presentation shouldn't have any dependencies on CMS.

CMS vNext will be part of the Office System, and as such will be released during the second half of next year (2006).

The last part of the session was a presentation of some resources for extending CMS:
*Search – SPS (not free), Mondosoft, Coveo (free < 5000 pages), Snow Valley
*Workflow – K2, Teamplate, Skelta (free for 25 concurrently active workflows)

12:50 | Computers and the Internet


2005-07-10

The Day “Indigo” Met the “Tiger”

[CTS381] The Day “Indigo” Met the “Tiger”
At TechEd 2002 Don Box held a seminar about .NET where he played the role of Morpheus from the Matrix movies. On the question of whether or not to leave the world of DCOM and enter the world of .NET, Don Box brought out the red and blue pills.

Ted Neward gave me a similar experience today. Indigo is definitely my new friend!

As opposed to previous seminars about Indigo, this one gave a very good picture of what Indigo is and how it will affect the way we design and develop software on the Microsoft platform in the future.

The seminar started off by identifying key problems of any interop technology. Even though transports such as Web Services simplify bridging between platforms, what do we do with other transport types such as queuing? And how do we solve transactions and security across different transport types?

So what are the most important aspects of Indigo?
• Unification of several existing Microsoft technologies
• Interoperability with non-Microsoft applications
• Support for service-oriented development

This means that with Indigo, technologies such as DCOM, Web Services, .NET Remoting, Enterprise Services and message queuing will all become transparent to the developer.

As the title of this seminar implies, this seminar was not only about Indigo as a platform, but also about how you could achieve interoperability with non-Microsoft applications. This was well demonstrated by Ted Neward.

/Mikael Håkansson

16:55 | Computers and the Internet

SQL Server Index Best Practices

[DAT328] SQL Server 2005: Index Creation Best Practices

Presented by Kimberly Tripp, SQLSkills.com.

While many indexing best practices continue from SQL Server 2000, there are some new features of which you should be aware. SQL Server 2005 offers more options to cover queries alleviating some SQL Server 2000 restrictions. In this session, we will review a good strategy for table clustering, look at supporting non-clustered indexes and focus on non-clustered covering as well as Indexed Views. Finally, in addition to index creation we will briefly cover the new online operations and how they will impact your table design. If you want optimal table structures, better data availability as well as a strategy for indexing for performance, this is the session to attend!

The official session description was very accurate. It all boils down to this: there is absolutely nothing that will give you as big a performance gain for as little work as indexing.

Index creation

Creating the right type of indexes, the ones you think will be best given the usage of the table, is extremely important. There are also some basic best practices that you can follow to ensure that your queries are more likely to use indexes.

In SQL Server it is generally speaking better to have fewer, more strategic, wider indexes than many narrow indexes. Generally, indexes are used to speed up seeking. This does not mean they are used only by selects; inserts, updates and deletes also use indexes.

Query optimization

How do we make sure the indexes we create will be used? In short, the queries we write should limit the column select list, limit the returned rowset, and the code in them should be designed for seeks and avoid scans – this alone can cause huge gains.

Also when talking about indexing, consider Indexed Views, they can on occasion be the ultimate solution.

Index concepts
Book analogy – think of an index as a book. The book has one physical order (this is the clustered index). It can then have many indexes (non-clustered indexes) that require a bookmark lookup, for example animal by common name, animal by scientific name, animal by habitat etc.
Tree analogy – when searching for data, if you have the ability to start at the root and then know which branches to take to get to a specific leaf, that is a seek. If you do not, you have to check every leaf to see if it matches your criteria; that is a scan.

Index Usage
Referring back to the previous analogies, it's fairly clear that the actual leaves are the data; non-leaf levels are used for navigation. In clustered indexes the leaves contain only the data. In non-clustered indexes the leaves contain the data + the clustering key (and any additionally included columns, a feature new to SQL Server 2005).

Heap
A table without a clustered index is called a heap. Since you are almost always recommended to have a clustered index I won't mention anything more about it.

Index maintenance
Although this session wasn't about index maintenance, there were a few things that had to be mentioned. After designing the indexing, when the system is running, use the ITW (2000) or DTA (2005) to capture the workload and refine indexes. Keep Auto Create and Auto Update Statistics on. Maintain indexes by rebuilding and/or defragmenting tables and indexes.

Resources
There are a lot of resources to prepare for SQL Server 2005. Go prepare!

14:40 | Computers and the Internet

WSS Site Definitions and Templates

[PRT391] Developing Site Definitions and Templates for Windows Sharepoint Services
Presented by Patrick Tisseghem, from U2U

Site definitions are certainly one of the hottest topics in the SharePoint community. Come to this session if you want to learn how to create new site definitions from the start to the end. We will cover how to get you started, how to tame CAML and how to package your new site definition to make it available commercially.

This session covered the WSS site template architecture and site definition internals. The slides for this presentation are perfect for someone wanting to get to know the WSS folder structure.

Sharepoint – a development platform

Microsoft wants you (the companies building applications) to leverage SharePoint as a development platform, creating applications on top of SharePoint. In itself, out of the box, SharePoint is a good product, but there are some steps you are likely to want to take before using it, like customizing the site. To customize the site you should really use FrontPage; that is the preferred way of doing it in both the current and upcoming version of SharePoint.

An internal testimonial to the fact that SharePoint is a good platform is the many other Microsoft products that leverage it. For example there are Project Server, VSTS, BizTalk (which has SharePoint adapters) and Great Plains (which isn't really used much in Europe but is very successful in the States).

Site definitions

Site definitions represent the lowest level of site templates. They consist of aspx and xml files. The xml files are CAML – the Collaborative Application Markup Language. It is poorly documented. When developing your own site definitions, make copies of one of the existing ones and go on from there. Do not change the default ones. Most often a copy of the STS site can be a good place to start. This session covered the basics of the directory and file structure, which directories and files contain what, how you alter them, and how to know what to add and where to add it. Again, the slides for this presentation are awesome, some well worth printing in A3 format and posting on the wall when you are new (or even when you are experienced) with SharePoint.

As part of the session a new site definition was created changing all the files necessary to be able to use the definition in WSS.

All the steps to create a custom site definition will be published as a tutorial on MSDN in August. At the end of September there are plans to make a WYSIWYG CAML designer available. There are also resources like SharePad and SharePoint Developer Project Kickstart on gotdotnet.com, among others, that hold great material.

12:47 | Computers and the Internet

SQLCLR vs. T-SQL

[DAT400] SQLCLR vs. T-SQL: Best Practices for Development in the Database

Gert Drapers, Microsoft, clarified the difference between using compiled logic and using interpreted logic. He also made it clear that T-SQL is here to stay, and that SQLCLR should only be used for non-data-centric methods. But he also showed there to be a great benefit in using SQLCLR when it comes to operations such as string handling or numeric operations.

/Mikael Håkansson

12:23 | Computers and the Internet

Visual Studio Tools for Office labs

Visual Studio Tools for Office (VSTO) [visto]

This lab explored the new VSTO tools, integrating with Excel and Office documents and action panes. It showed how to incorporate logic in a user interface known to the user: Office. This lab was divided into two parts, of which I did one.

Lab materials

Also, all lab manuals will be included on the post-conference DVD. This is useful for some, but it still requires that you have a correctly set up environment to run the labs in. As far as I have been able to find out, the lab materials themselves, and of course the virtual images they run on, which have the correct environments, will not be included, only the lab manuals. The latter I can understand, since that would mean an awful lot of DVDs per person, times, what, an attendee count of 6500 or so. That's a lot of DVDs.

11:44

Building Office solutions

[WCD361] Visual Studio Tools for Office: Building Office Solutions Using Visual Studio Tools for Office 2005
Presented by Ken Getz, Microsoft.

Different presenters have different styles. Take this guy for example; he must be one of the fastest-talking guys around. Don't get me wrong though, he did it well.

With the Visual Studio 2005 Tools for the Microsoft Office System, in short VSTO, pronounced [visto], building Office applications has gone from "not exactly very difficult, but…" to "really easy". The idea is not only to enable you to harness the power of Office, but to relieve you of having to write the application plumbing that Office can provide for you.

I believe that the new advances in Windows Forms to simplify Office integration will produce an increase in applications being built on Office. They might not exactly produce an explosion, but still… These are the topics covered.

Controls

All common Windows Forms controls can be used in a VSTO project. There are also some special Office controls, host controls, available (like the Bookmark, Action Pane and Smart Tag controls). You have full design/debug support from within Visual Studio for your applications running in any of the Office products. All design is drag and drop, as with any Windows Forms project.

The controls put into an Office document are actually hosted within an ActiveX document. There is a generic ActiveX wrapper that is used.

Most often you end up putting much of the application logic in an Action Pane control, visualizing it in the document itself. Smart Tag controls are among the really cool controls. I guess most of you are familiar with the concept of a smart tag, especially in Office. They are the miniature popup dialogs that appear and, for example, ask you for formatting when you paste something. You can hook into these, defining your own terms (strings or regular expressions) for them to react to and the actions (which can be any code, i.e. WS calls or whatever), and register them to the document.

Secure by default

As part of the SD3 initiative, Office applications are secure by default, meaning that by default the applications you build are not allowed to run on another computer; they have to be explicitly allowed.

VBA or .NET

In old versions of Office-enabled applications you used VBA code. You still can, and VBA code and .NET code can exist side by side in a document. It is however recommended that you do not have both VBA code and .NET code simultaneously, at the very least not reacting to the same event. If you do, it is not deterministic which of the event handlers will fire first. Of course the preferred way of doing things is now the .NET way. There are also a couple of things that aren't possible with VBA that are now possible with .NET, such as WS integration.

VB.NET or C#

For the C# programmer there is an overhead (in effort) to using VSTO. This is because the underlying APIs are built primarily for Visual Basic, with for example optional parameters. There are white papers available on how to use VSTO from C#.

Demo

The Word application demoed shortly in the previous session [WCD201] was examined more closely, looking at the under-the-hood code. We looked at things like how the document's fields and properties are defined as XML nodes, what the events are and how you do databinding (which by the way naturally follows the new Windows Forms way of databinding). There are also the Application object, the Globals.ThisDocument, the Globals.Sheet1 etc. A lot of the things you can do in VSTO can be attributed to the advances of Windows Forms.

11:41 | Computers and the Internet

Building smart client applications

[WCD201] Building smart client applications with .NET
Presented by Tony Goddhew, Microsoft.

Taking this session was a split-second decision early in the morning, after the feeling that my schedule for the day contained a little too much SharePoint; I wanted to spread my graces a little bit more.

This session, being a level 200 session, was meant to give an overview of what a smart client application is, and in what ways you can produce a smart client application using the .NET framework. Why and when to use a web presentation layer and when to build a richer (smarter) client. And what opportunities there are in the .NET framework to realize this.

Outlook for example is a very rich client, or a rich web experience if you use web access. Each type of client has its usage scenario.

Escalation

The escalation ladder is basically: a web application -> a web application with Windows Forms controls -> a "premium" smart client, downloading the GUI as a separate application to the client and running it from there. The latter can also be an Office smart client using VSTO.

Smart clients

Per definition a smart client is an application that delivers high developer productivity with central administration, making use of local resources to deliver a pleasing user experience. This means some kind of Windows Forms application (VSTO counts as such) and eventually Avalon. The Compact Framework and VSTO are technologies that can help implement a smart client. Customers want applications that look and feel like Office, since it's a known user interface; VSTO gives you the opportunity to do this. The ClickOnce concept also helps the smart client concept, since it simplifies application deployment and therefore administration. Avalon will be extending the rich client experience. It uses a declarative model plus code, and splits UI and code, enabling the UI designer to work on the design and the developer to work on the business functionality, kind of like aspx and its .cs files. Also worth mentioning is that the presenter described Avalon as "not a replacement for Windows Forms, it is an extension", in case you might have been wondering.

Demos

As the main demo, the fictitious company Contoso Realty was presented, along with the various ways to communicate with them and the ways they communicated internally.

The session ended with the Media Mania Avalon demo. I had seen that before, they like demoing that, and granted, it's a very visually appealing application.

11:16 | Computers and the Internet

2005-07-07

System.Transactions

I took a Self Paced Hands-On Lab about System.Transactions.

This lab was really short; there really isn't much to it. To work with System.Transactions all you really need to do is:

using (TransactionScope sc = new TransactionScope())
{
    // .. code that runs in the context of the transaction here…
    sc.Complete();
}

An exception, or anything else that causes sc.Complete() to not be called, will result in the transaction being rolled back. The TransactionScope component automatically handles any escalation of your transaction.
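
The rollback behavior is easy to observe even with nothing enlisted in the transaction; a minimal sketch (assuming only System.Transactions, no database):

```csharp
using System;
using System.Transactions;

class TxDemo
{
    static void Main()
    {
        Transaction ambient;
        using (var sc = new TransactionScope())
        {
            // Clone the ambient transaction so we can inspect it after the
            // scope has been disposed
            ambient = Transaction.Current.Clone();
            // sc.Complete() is intentionally NOT called
        }
        // Disposing the scope without Complete() aborts the transaction
        Console.WriteLine(ambient.TransactionInformation.Status);
    }
}
```

Had sc.Complete() been called before the scope was disposed, the status would have been Committed instead.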

18:01 | Computers and the Internet

Reports from the Exhibition floor

Oracle is giving away an iPod Photo 30GB. The idea is to show up at the Oracle stand wearing the shirt, which you can pick up for free at the stand. There were probably 100 or so persons there the first day, when I was among them. The draw is made by selecting one of four registration iPaqs, which then randomly selects a name from within it. Probably 30 or more names were called before they finally called the name of someone who was actually there, and when they did, the poor schmuck wasn't wearing his T-shirt. So he got nothing. After another 10 names they had a lucky winner. One is to be drawn each day. I am not sure how the other days progressed, but there is no reason to think it would be any different.

I was looking at Microsoft-branded clothing to wear, thinking it might be a bit cheaper because it's a conference and because it's branded. There was no such thing going on though. The rugby shirts were €30 and the T-shirts €10. Since such clothing is fairly limited in its use (meaning you can wear it everywhere, but it's in a limited population that the Microsoft logo carries any recognition), I bought none.

I did however pick up a stack of good-to-read magazines that I hadn't gotten already, among them a couple of copies of SQL Server Magazine.

The bookshop was another example of something that I would have thought was going to be cheaper than usual, but wasn't. The discount that we get at Akademibokhandeln outweighed the prices of the books at this place, so no books either.

We took skills assessments today and got SQL Server 2005 beta books as a thank you. Picked up a free book about Visual Studio 2005 Team System as well. Now we've got something to read during vacation 😉

Intel's got balls! Literally. They had little blue and white balls, not unlike the juggling balls we had in the GIT project. It was sooo close that they ended up flying all over the place…

17:58 | Computers and the Internet

2005-07-06

TechEd Day 2 Summary

If nothing else is noted we both attended a specific session; where we went separate ways, and we did, a character (J/M) will denote who took the specific session.

  • Keynote
  • Microsoft Visual Studio 2005 Team System: Managing the Software Lifecycle with Visual Studio 2005 Team System
  • SharePoint Portal Server 2003: Best Practices for an Implementation (J)
  • SQL Server 2005: Bridging the gap between administration and development (J)
  • Microsoft’s Integration Technologies: When to Use What? (M)
  • Basics of XML and XQuery (M)
  • Creating Dynamic Web Sites with ASP.NET 2.0 Web Parts (J)
  • Hands On-Labs: Biztalk 2004 Nuts and Bolts (J)
  • Hands On-Labs: Biztalk 2006 (M)
  • The exhibition hall
  • The TechEd welcome reception

20:13 | Computers and the Internet

Hands On-Labs Day 2

The Hands-On Labs are self-paced, using lab guides that will eventually be included on the post-conference DVD. There is a large hall with computers where you start a specific lab by launching a VPC configured specifically with the environment required for that lab. I'll be adding a picture of the hall to this post later.

Biztalk 2004 Nuts and Bolts

This was the same content as in the BizTalk course that I attended earlier this spring. The catch-up was good though, since I haven't done any BizTalk since. It comes back quickly though. Having spent about 2 hours on it today I have done all of the relevant exercises with schemas, orchestration, transactions, health and activity monitoring and web services. Of course, it contained more labs than that; I skipped some.

Biztalk 2006

The new version of BizTalk contains no major enhancements to the human workflow engine.

20:06 | Computers and the Internet

ASP.NET 2.0 WebParts

[PRT395] Creating Dynamic Web Sites with ASP.NET 2.0 Web Parts

Presented by Andres Sanabria, Microsoft

Drill down on the new Web Parts infrastructure in ASP.NET 2.0. Learn how you can use Web Parts to build rich Web sites, enabling end users to dynamically control the layout of pages, and customize the properties of server controls.

What users have told Microsoft they want: what we want, when we want it, where we want it. WebParts solves that.

Future

WebParts will be fully integrated into ASP.NET 2.0. The next version of SharePoint will work with ASP.NET WebParts. Current SharePoint-technology-based WebParts will also work with the next version of SharePoint. In the long run, the idea is to build ASP.NET-technology-based WebParts. Whether ASP.NET's WebParts will work in the upcoming service pack release of SharePoint, released solely to support ASP.NET 2.0 with MasterPages etc, is not yet determined; they are working hard to make it so though. The next version of SharePoint will, of course, be fully compatible with ASP.NET 2.0.

Display modes
Browse, Design (drag and drop, move etc), Edit (display UI for the user to interact with the properties behind the scenes), Catalog (add new WebParts to the page), Connection (connecting WebParts).

The WebPart Manager

The WebPartManager control is a non-visual ASP.NET control that manages all WebParts on a page. It is the first control to drop on a page and is required to support a dynamic user interface. It works like any other control and handles the WebParts on the page.

Personalization Engine

The personalization engine stores data for the WebPartManager. It is based on the provider pattern, with built-in support for SQL and Access. Since it uses the provider way of doing things it is completely extensible.

WebParts introduces a new machine/web.config section named webParts, and under that other subitems like personalization etc.

Zones

Layout will be managed with zones, as in the current version of SharePoint. Zones depend on the display mode, i.e. EditorZone, CatalogZone (for both Catalog and Connection), WebPartZone. Multiple zones can be shown simultaneously, i.e. the WebPart is still visible when you enable the EditorZone. The WebPartZone can host any type of control: ASP.NET controls, UserControls and WebParts. EditorZones etc can only host editor parts.

A WebPart working page with drag and drop enabled can be created without any code having to be written.

Non-WebParts are behind the scenes placed into a GenericWebPart before they get rendered.

Interfaces and base classes

WebParts are implemented through the IWebPart and IWebActionable interfaces and the WebPartControlBase control. There is also ITextCommunication available for creating connected WebParts.

EditorParts

Four types of Editor parts: AppearanceEditorPart, BehaviorEditorPart, LayoutEditorPart, PropertyGridEditorPart.

Verbs

Verbs control the behavior of a WebPart, i.e. Close, Minimize etc. If they are not implemented when you build your own WebPart, or if they are turned off on other WebParts, those actions cannot be taken.

WebPart design in Visual Studio is now drag and drop, which it wasn't before unless you used the SmartPart, downloadable from gotdotnet.

Demo

A demo building a fully WebPart-driven page, with everything that goes with it, was built from top to bottom during the session. Very little code was used though. The WebPart framework seems to be a lot of drag and drop, all in the "developer/development productivity" line of thought I guess.

Learning

There are Self-Paced Hands-On Labs available on this subject; I hope to have time to check those out later this week. There is also free e-learning as well as other resources available online.

Conclusion

The good news is that the new WebPart development feels more thought through than the current SharePoint WebPart development.

20:00 | Computers and the Internet

It’s not all for fun

Just a note for those of you thinking that this might be some kind of vacation in disguise: It’s not! In fact, it’s something of the opposite! Take last night as an example, and no, yesterday was not an extreme, it’s looking to be repeated today. We left the hotel at a quarter to eight to arrive at the convention center slightly past eight. Once there it was an all-day event, the day was packed with sessions and labs. We didn’t get home until a quarter to eleven, and when we did we spent three quarters of an hour analyzing and documenting the day. I personally didn’t get to sleep until well after midnight, and it keeps on going.

Now don't take this as if I am complaining; I'm loving it. It's not a vacation, though I might need a vacation to catch up on some sleep after this.

18:10

I hate KFC

Kentucky Fried Chicken doesn't exist in Sweden; now I know why. There is nothing wrong with their food, it's tasty enough, but the stomach aches that go with it… No fun!

18:02 | Food and drink

XML and XQuery

Basics of XML and XQuery

This Chalk and Talk seminar was more down to earth as opposed to the more product-centric seminars we've previously attended. I was looking forward to this seminar because I've often come to the question of what would be the best practice for querying an XML file. There are several techniques within this field: for parsing and rendering, XSL/XSLT would be the preferred choice, whereas for getting specific values out of an XML file you would be recommended to use XPath. But if you'd like to retrieve a subset of the XML while still keeping the structure, XQuery might be what you are looking for. The problem is that Microsoft is currently not supporting XQuery. They did release a Microsoft.Xml.Query namespace with the 1.1 version of .NET; however, this is no longer to be found.
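
To make the XPath option concrete, here is a minimal example using the System.Xml API that does ship today (the sample document and element names are made up for illustration):

```csharp
using System;
using System.Xml;

class XPathDemo
{
    static void Main()
    {
        // Illustrative sample document (made up for this example)
        var doc = new XmlDocument();
        doc.LoadXml(
            "<animals>" +
            "  <animal habitat='sea'>Dolphin</animal>" +
            "  <animal habitat='land'>Lion</animal>" +
            "</animals>");

        // XPath: pull a specific value out of the tree
        XmlNode land = doc.SelectSingleNode("/animals/animal[@habitat='land']");
        Console.WriteLine(land.InnerText);

        // XPath: count matches with an expression
        int count = doc.SelectNodes("/animals/animal").Count;
        Console.WriteLine(count);
    }
}
```

An XQuery over the same document could instead return a restructured XML fragment while keeping the element structure, which is exactly the gap the missing XQuery support leaves.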

As Michel Rys opened the seminar by asking "how many of you are familiar with the basic concept of XML?", I was concerned that only half of the attendees raised their hands. Even though most of the discussion was about XML, it was quite an interesting seminar. As it turned out, Michel Rys is a board member of the W3C, and shared much information about their work on XML, XML Schemas, DTD, XQuery and much more.

I didn’t really get an answer to my question about how and when Microsoft would support XQuery in the future. Apparently W3C has not yet agreed upon how this standard should be implemented. His guess was that it might be ready by next year (after seven years). Nevertheless, we could expect XQuery support in SQL Server 2005.

/Mikael Håkansson

14:34 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

Integration: When to use what?

Microsoft’s Integration Technologies: When to Use What?

The session started out with an inventory of all transport integration technologies provided by Microsoft.

  • Message Queuing
  • SQL Service broker
  • SQL Integration Services
  • Indigo
  • SQL Replication
  • Host Integration Server
  • BizTalk Server

Scott Woodgate continued his presentation by dividing these technologies into two categories: Data integration and Message integration, where data integration would be preferred when dealing with large chunks of data. Message integration would of course be preferred where there would be a more “on demand” requirement or where any business logic would interact with the process.

Focusing on the message integration, this category was broken down into:

Direct (RPC, SOAP (WebService), Indigo and HIS)

Queued (MSMQ, Indigo, SSB and HIS)

Broker (BizTalk Server)

Much time was spent on a discussion about best practices for when to use which method. As much as this was very interesting, it was too long to be described in this blog.

The biggest wow! feeling:

Since I’ve not been working with SQL Service Broker or Indigo, I thought this shed much light upon how Microsoft looks at integration in the future.

Interesting, but not new:

From my point of view, BizTalk differentiates itself by not qualifying as a mere transport technology. However, the demo of BizTalk working as a message broker together with Indigo was neat.

/Mikael Håkansson

14:29 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

SQL Server 2005: Bridging the gap

[DAT250] SQL Server 2005: Bridging the gap between administration and development.

Presented by Kimberly Tripp from SQLSkills.

Choosing this session started off kind of badly for those who attended Kimberly’s pre-conference seminar: the first 25 minutes or so repeated what was said yesterday. Not a good treat to start with.

The rest of the session can be summarized as “News in and when to use T-SQL and/or SQL CLR from an administrative as well as development perspective”.

Key topics

  • CLR development and deployment. Both directly from within Visual Studio (from a developer perspective) but also to a test/production environment (from a dba perspective).
  • XML support, which is now searchable, indexable, can be strongly typed (schema) and displays nicely in Management Studio.
  • SQL Server Web Services – HTTP endpoints, CREATE ENDPOINT…FOR SOAP…WEBMETHOD ‘Bla’. This is mostly used for non-SQL clients and interoperability, putting ease of use in front of performance.
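To make the HTTP endpoint bullet concrete, here is a hedged sketch of what the SQL Server 2005 syntax looks like (the endpoint, database and procedure names are made up):

```sql
-- Expose a stored procedure as a SOAP web method over HTTP.
CREATE ENDPOINT customer_endpoint
    STATE = STARTED
AS HTTP (
    PATH = '/customers',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR)
)
FOR SOAP (
    WEBMETHOD 'GetCustomer' (NAME = 'MyDb.dbo.GetCustomer'),
    WSDL = DEFAULT,
    DATABASE = 'MyDb'
);
```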

Transact SQL Enhancements

  • ROW_NUMBER
  • RANK, DENSE_RANK
  • Common Table Expressions
  • PIVOT/UNPIVOT
  • CROSS APPLY, OUTER APPLY
  • TRY/CATCH
  • DDL Triggers
  • Event Notifications
  • Parameterized TOP

She demoed how to use the CLR to integrate an Amazon web service with your database. For this she used T-SQL with CROSS APPLY/OUTER APPLY, e.g.:

select * from Actors a
  CROSS APPLY dbo.fn_GetMoviesForActor(a.FullName)

She also talked about when to use XML columns and when not to. Good candidates for an XML column would for example be: large, seldom-updated data, or hierarchical data with many sparsely populated, extensible attributes.
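A hedged sketch of what that looks like in practice (the table and XPath below are invented for illustration): an xml column queried with the new xml data type methods.

```sql
CREATE TABLE Products (
    Id   int PRIMARY KEY,
    Spec xml   -- can be typed against an XML schema collection
);

-- Pull a scalar value out of the XML, and filter on XML content.
SELECT Id,
       Spec.value('(/product/weight)[1]', 'int') AS WeightGrams
FROM   Products
WHERE  Spec.exist('/product[@category="tools"]') = 1;
```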

To be proficient with the new version of SQL Server, Kimberly suggests that you should be a “Jack of all trades, master of some…”. That is, get to know all parts of the product, but choose only a few to dive deeper into.

14:27 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

SharePoint Best Practices

[PRT371] SharePoint Portal Server 2003: Best Practices for an Implementation

Or as the presenter called it: ten key decision points to achieve an outstanding SharePoint deployment.

Presented by Richard (Rick) Taylor, a consultant from Mindsharp.

The session was presented as a kind of do’s and don’ts, mostly on the level of design decisions or project management, without any look at code or coding techniques. I had kind of expected more development-related best practices, such as: change templates this way, don’t alter these files this way, etc. There are sessions later in the week that will probably go into more detail about such things. This session however was not on that level.

Main issues

The main problem with SharePoint solutions, in the experience of the speaker, is that the Portal/WSS concepts and differences are not clear. Portal is not for collaboration. WSS is for collaboration. Collaboration is for the sites, not for the portal.

It is important to understand that SharePoint is NOT a document management system. Document workflow does not exist. If you bought it for that, take it back. SharePoint is NOT a content management system. If that’s what you want, get MCMS. SharePoint IS however a document collaboration system.

Three reasons for getting SharePoint Portal Server
1. Aggregate info.
2. Indexing and Search.
3. Personalization, My Site.

It’s important to know (to stress) that the sites and workspaces are where collaboration happens. Not the portal.

Top five things you need to get user acceptance for a SharePoint project
1. Champion (someone who speaks loudly and preferably enthusiastically on your behalf)
2. Funds
3. Grassroots support
4. Clear project plan with milestones
5. Patience (no one likes change)

These points aren’t unique to SharePoint projects though.

Roles to have on the SharePoint Planning (Project) Team
Project Manager, Project Sponsor (Who holds the funds), Stakeholder (with interest in Sharepoint like solutions), Architect (with detailed knowledge of Sharepoint), IT-staff, Developer, Testers, On-Site trainers, Help Desk Staff and Technical Writers.

SharePoint installation
1. Do not uninstall; it doesn’t work well. If you need to uninstall, nuke it (fdisk, format, install the OS etc. from the start).
2. Don’t install WSS before Portal, it can get you into trouble; let Portal install WSS.

Libraries and Lists
Don’t upload all documents into the portal; keep them on file servers etc. Uploading them will make the SQL Server database unnecessarily large. Leaving them on file servers will give you some management problems though: check-in/check-out and versioning. There are workarounds to get that to work (which those are was not mentioned). An issue with versioning is that each version is a full copy of the document. A 1 MB document can quickly become a lot of data if updated frequently.

Expose documents in lists, but don’t put over 2000 items in one list; 2000 items will take 2 minutes to display. Surveys suggest that the average time a person waits for a web application to respond before judging it as slow is 6 seconds.

Site Structure

Don’t build organizational levels deep; go wide and shallow, or you will hit the 255-character URL limit. That is, have many same-level organizational units. Do not thread your way into the organization in an overly granular way, adding to the URL.

Search
Be careful of what you index, it can use a lot of bandwidth. Indexing is also very resource intensive. Medium to large farms require dual Xeons, 4 GB of RAM and very large drives. Gigabit Ethernet is highly recommended.

Searching is probably the most problematic area in SharePoint. Having 50 content sources probably requires a half-time position just to maintain them.

On Extranets

When SharePoint is used for extranets and Active Directory is used for authentication, you will probably use two Active Directory forests. This will increase staffing for administration. You will also need the External Connector license. You will also need to plan for the security of such a solution. Really think about the consequences before placing it on the extranet.

Shared Services

Shared Services require even more staff. If you go to Shared Services you can’t go back. It’s also recommended that you have Gigabit Ethernet between all servers in the farm. Many times this is not possible. You don’t have to, but if you don’t – you will suffer.

Editing

Only use FrontPage 2003 to edit the SharePoint site. Do NOT use Dreamweaver or anything like it. You will eventually break it.

Do NOT develop in production, because if you break anything, chances are you won’t be able to fix it.

SharePoint doesn’t like to get broken, and when it’s broken, it’s hard to fix. More often than not it is more time-efficient to nuke it.

Security

Deploying WSS sites means giving control to the users. On an administration level it might be difficult to keep control when you have several thousand sites.

Audiences

The concept of audiences in SharePoint is not for security purposes; it is for personalization. Audiences are not a way to secure information: they only stop you from navigating to it, they do not stop you from directly accessing the information.

Users

Each Personal Portal is a site collection. Using personal portals requires education. Set up policies before letting users on. Require training before users are allowed to use the portal.

Developer Machines

Do not underestimate the need for powerful machines for developers, and the associated cost. Developers need W2K3 + IIS + WSS + SPS + SQL + VS.NET, which requires a pretty powerful machine to run smoothly, especially if you run it in a VPC. 1 GB of RAM, or more, is highly recommended.

IT pros handling SharePoint need to understand Code Access Security. It’s very important when handling Web Parts.

Test and Stage environments

Testing and staging can be done in a Virtual Server environment using the same configuration as production. This demands a powerful machine, but still, you can get away with only one. Even load balancing and clustering work in Virtual Server.

14:10 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

Visual Studio 2005 Team System

DEV260 Microsoft Visual Studio 2005 Team System: Managing the Software Lifecycle with Visual Studio 2005 Team System – Presented by Michael Leworthy and Eric Lee, Microsoft.

Visual Studio 2005 Team System is an extensible lifecycle tools platform that significantly expands the Visual Studio product line and helps software teams collaborate to reduce the complexity of delivering modern service-oriented solutions. This session provides an overview of the suite, tools and features included and shows practical examples of how this can reduce the complexity of delivering modern service-oriented solutions that are designed for operations.

I found this session to be very much an overview which, in its defense, it was supposed to be. It never really got down to the level of detail I would have liked, though. Visual Studio 2005, and Team System in particular, is so massive that in one hour’s time there just isn’t room to go into details.

Important concepts to know from this session are that the Visual Studio set of tools has expanded. It now includes special functionality (and in many cases special versions) for infrastructure architects, solution architects, developers, testers and project managers.

There are basically three versions in the Team System set of tools; these are: Software Architects, Developers and Testers. There is also Visual Studio Team Foundation Server, which includes the enterprise source control, reporting, project portal, project management and other items.

Emphasis was also placed on the fact that the data being collected can be used as easily from Project, Excel, Outlook, or whatever else your favorite tool might be, as from within Visual Studio or through the team portal – the team portal actually being a WSS site, as far as I could tell.

Also, more tools for performing code analysis and profiling at the developer level are built into Visual Studio.

Tools for testers have gone from being an afterthought to being in focus, with a special version just for testers. Load testing is one notable component included for testers.

As far as licensing goes, as a Universal subscriber one had almost gotten used to having all the tools within a download’s reach without having to worry about licenses all that much. Well, not any more. You will not get all the tools from Visual Studio Team System with MSDN Universal. Instead, you get to choose one of Architect, Developer or Tester. If you want the entire suite, it will cost you extra. With one client version, one CAL for a server license is also included. Other than that, Team Foundation Server follows the general Microsoft CAL licensing rules. No price levels have been announced yet though.

There will also be a selection of free preparation courses made available on Microsoft Learning in the time before the release, which is to take place “the week of November 7th”.

08:20 | Add a comment | Permalink | Show trackbacks (0) | Computers and Internet

2005-07-05

TechEd Keynote

In this keynote Andy Lees will highlight key Microsoft innovation investments that help you play a crucial role in driving greater growth and business success for your organization.

Andrew Lees is the Corporate Vice President, Server and Tools Marketing for Microsoft Corporation .

That’s the way to do a presentation! Taking a sledgehammer to a router, pulling the processor fans out of a Sun system etc. The keynote was very entertaining. But it was also all sales pitch.

The main topics covered were:

  • Virtualization, playing on the do more with less theme. Being able to run virtual machines, focusing on virtual server 2005.
  • Performance comparison between SQL Server 2000 32-bit and 2005 32-bit and 64-bit, in a very graphical format, visualizing how SQL Server 2005 64-bit was able to handle 6 times the load of a 32-bit SQL Server 2000.
  • Cost comparisons; as always these are to be taken a bit lightly, but they showed, of course, that SQL Server had by far the most installations but that they had less revenue from those installations than, for example, Oracle and IBM with DB2, since SQL Server is more cost effective (as I said, sales pitch).
  • Developer productivity, showing off how developers can be more productive with the Visual Studio 2005 set of tools. Here Andrew Lees told us that Microsoft has shifted focus from a developer perspective to the development perspective, meaning the productivity of the team as a whole as opposed to the single developer.
  • Dynamic IT. This was the point of the coolest demos (sledgehammer etc.), showing off MOM.
  • The new world of work, how to enable information workers to get the aggregated information they need when they need it.

All very interesting, but no details. However, some of the things mentioned are worth keeping in mind and examining more closely later in the week.

BizTalk Server 2013, Performance

Can I have 100 BizTalk Server Host Instances?

On occasion you see these questions. Can I run X number of host instances? What will happen? Without diving deep into the reason why you think you need to, or the details of what is happening inside BizTalk Server when you do, I will present some results of doing that.

First, I needed a machine to play around with, and I wanted a reasonably powerful one, so where better to go than Windows Azure? I selected a BizTalk Server 2013 Evaluation Edition pre-configured image.

image

I chose the Extra Large size – that’s 8 cores and 14 GB of RAM.

image

When that machine was provisioned for me, this was how the performance of it looked:

image

Now I am fully aware that viewing only the Task Manager’s idea of the processor is a very limited view of “performance”, but I am purposefully using that view so that you, the reader, understand that this is NOT intended to be a deep dive. It is merely an indication.

So, I ran a script I have to create 100 hosts and host instances for me, along with FILE adapter handlers for those hosts. But so far no ports and no traffic. This is how that looked.
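The script itself is not shown here, but as a rough, hedged sketch, such a script could use BizTalk’s WMI provider along these lines (host names, NT group and service account below are made-up illustrations, and error handling is omitted):

```powershell
# Sketch: create 100 in-process hosts with host instances on this server.
$ns = "root\MicrosoftBizTalkServer"

1..100 | ForEach-Object {
    $name = "LoadTestHost$_"

    # Create the host itself (HostType 1 = In-Process).
    $h = ([WmiClass]"$ns`:MSBTS_HostSetting").CreateInstance()
    $h.Name        = $name
    $h.HostType    = 1
    $h.NTGroupName = "BizTalk Application Users"
    $h.Put() | Out-Null

    # Map the host to this server; this defines the host instance.
    $map = ([WmiClass]"$ns`:MSBTS_ServerHost").CreateInstance()
    $map.HostName   = $name
    $map.ServerName = $env:COMPUTERNAME
    $map.Map()

    # Install the host instance under a service account, then start it.
    $hi = Get-WmiObject -Namespace $ns -Class MSBTS_HostInstance |
          Where-Object { $_.HostName -eq $name }
    $hi.Install("DOMAIN\svcBizTalk", "password", $true)
    $hi.Start()
}
```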

image

You can see the Processor churning away at about 20% utilization, while Memory is largely unaffected.

Taking it one step further, I wanted to make sure that the host instances actually had something to check for, so I created 1 receive port and 100 send ports – one send port for each of the host instances (each send port uses that host’s handler for the FILE transport).

image

That put the machine under a little more pressure. Obviously it is doing something more when it actually has ports configured. Processor is at ~60%. Memory again not really affected.

Remember now, this is all just doing nothing at this point. Let’s see what happens if we actually do something:

image

That piece of the graph marked in blue above is when I dropped a file on the receive location that all 100 send ports (one for each of the hosts) were subscribed to. It went to 100% quickly. All files went through quickly though, and the event didn’t last long enough for BizTalk to start throttling.

So what about the memory, though? Is it really not affected? How much does a BizTalk host instance use, and won’t 100 of them make an impact? Well, it turns out that each host instance will, at this particular point, only use about 20 MB.

image

But ok, 100 host instances is a lot. What about 50? Still the same config as above, but with only 50 host instances started.

image

A bit jagged, but still: running only 50 takes processor utilization down from 60% to 20%. Now what if we send something through?

image

A short spike when sending the document through to its 50 subscribers.

Taking a little bit of a deeper look at that spike we can see that SQL Server is the main contributor to that spike.

image

Hmmm. Ok. “But”, says the customer, “I want all of this to be low latency. 50 ms polling. Do it!”

image

CPU goes up from ~20% to a little under 40%. But it also changes characteristics. Where before it was jagged, it now becomes more or less a straight line. The processor does not get to rest, and it does not spike in the same way.

Unless of course you send a message through, in which case it does again spike.

image

But it’s just a very short spike. This simple test says nothing about whether this would or would not be a bottleneck. I have not done any extended tests to see what the maximum sustainable throughput (MST) would be for this machine.

What if we raise the number of host instances just slightly? To 75.

image

See that marked point in time above? That’s where I enabled the host instances. The processor goes to about 60%. 100 again, you say? Let’s try it…

image

Again a visible increase in power needed, with the processor now at about 70% and pretty flat.

image

Sending a message through again spikes it.

image

And SQL is the thing that is grabbing most of that processing power.

image

So there you go. That’s what would happen if you run 100 hosts and 100 host instances on a single machine, and if you put them all to poll the database at 50 ms.

This was done on a Windows Azure Virtual Machine with the BizTalk Server Evaluation image in its default configuration. I did nothing to it. No updates, no tuning, no alterations. I know for certain that I can improve the performance of what we see above.

You can draw your own conclusions from the above. My own conclusion is “inconclusive” ;). That is – I can see that running 100 host instances with 50 ms polling on a machine where BizTalk and SQL share the same box, and the machine is not optimized, does not bring down the machine by the sheer volume of polling alone. However, when running even simple traffic through, we hit the roof. If this load were placed on a distributed environment – SQL and BizTalk on separate machines, SQL with a more optimized storage architecture etc., BizTalk with other configuration such as global tracking disabled etc. – I should think that the scenario is doable.

I would however highly question why you think you need 100 hosts and 100 host instances. There is a lot of functionality in BizTalk, for example SSO affiliate applications, that solves some of the problems that make you think you need that many. My recommendation is certainly not to go there unless absolutely necessary.

HTH,
/Johan

BizTalk, BizTalk Server 2013, Licensing

BizTalk Server 2013 Developer Edition arrives

Microsoft BizTalk Server 2013 Developer Edition New SKUs and Changes

Effective November 1, 2013, Microsoft BizTalk Server (BTS) 2013 Developer Edition licenses will be available under the Developer Tools license model in the Open, Select Plus and Worldwide Government Partner programs. Previously offered as a free download in prior BTS versions, the new BTS 2013 Developer Edition offers the full functionality of the
BTS 2013 Enterprise Edition, licensed for development and test use only, and includes the newly released Host Integration Server (HIS) 2013 software.

Source: http://microsoft.also.ch/fileadmin/Dateien/Dokumente/Pricelist/November_2013_Price_List_Guide.pdf, http://www.enpointegov.com/newsletter/october2013

You will also find it if you run the Microsoft License Advisor at http://mla.microsoft.com/.

image

The Developer Tools license model

Previously, with BizTalk Server 2010, you bought BizTalk Server licenses for servers only (Branch, Standard and Enterprise); the Developer Edition was free. With 2013 you must purchase a Developer Edition license per user, for users that do NOT have MSDN.

You must acquire a license for each user you permit to access or use the software. You may install any number of copies on any number of devices for access and use by one user to design, develop, test and demonstrate programs. Only licensed users may access the software.

Source: https://www.microsoft.com/licensing/about-licensing/product-licensing.aspx

My interpretation of “access the software” (but I am not a license expert!) is that it is ok for BizTalk Server to exist in a test environment where it routes traffic and performs integrations to and from other systems that at their end have users, devs or admins that are not licensed. It is also ok for other users of the same server to access the server where BizTalk is installed to, for example, administer Windows (as long as that in itself is properly licensed). The limiting factor is the users that in some form access the BizTalk Server software itself, such as to deploy, configure, administer or in other ways interact with the BizTalk Server GUIs or services.

Price Lists

https://mspartner.microsoft.com/en/us/pages/licensing/price-lists.aspx

Target prices seem to be around $37 Estimated Retail Price (ERP). https://mspartner.microsoft.com/en/us/Pages/Licensing/Downloads/open-license-estimated-retail-price-list.aspx

image

I have also seen $36 for price level C. https://mspartner.microsoft.com/en/us/Pages/Licensing/Downloads/open-license-level-c-estimated-retail-price-list.aspx

MSDN

For those with an MSDN subscription (http://msdn.microsoft.com/subscriptions), the list of subscription types that have access to this seems to be quite large:

VS Pro with MSDN (VL)
VS Pro with MSDN Premium (Empower)
VS Pro with MSDN Premium (MPN)
VS Test Pro with MSDN (Retail)
VS Test Pro with MSDN (VL)
VS Ultimate with MSDN (MPN)
VS Ultimate with MSDN (NFR FTE)
VS Ultimate with MSDN (Retail)
VS Ultimate with MSDN (VL)
BizSpark
BizSpark Admin
Designer AA
DreamSpark Premium
DreamSpark Standard
MSDN Platforms
VS Premium with MSDN (MPN)
VS Premium with MSDN (Retail)
VS Premium with MSDN (VL)
VS Pro with MSDN (Retail)

See screenshot:

image

How do I get it?

Summarizing from the above: you either have an eligible MSDN subscription, or you get it through traditional volume licensing channels.

HTH,
/Johan

BizTalk, WCF

BizTalk Send Ports, WS-Addressing, ClientVia and non-http prefixed To headers, Part 2

In a previous post I explained how we had a need to use the WS-Addressing To header to send a non-http prefixed URI, such as urn:company/path/subpath/Service1, and how that was supported, after a fashion, out of the box in BizTalk Server. It did however come with the limitation of not being able to edit the WCF config in the BizTalk Server Administration Console GUI once you loaded it from a binding file. I don’t like limitations.

In this post I’ll show you how you can create a very simple WCF behavior to help you to set a RemoteAddress EndpointAddress Uri to be able to accomplish the same thing, while still being able to continue to edit the port configuration.

WCF behaviors allow you to intercept and inspect or alter messages or metadata, or in other ways modify the runtime behavior or processing as messages are sent or received. In this case we are creating a client behavior.

The behavior is in essence very, very simple; its only purpose is to alter the endpoint address at runtime. The place where I chose to implement this is in the ApplyClientBehavior method of the IEndpointBehavior interface.

void IEndpointBehavior.ApplyClientBehavior(ServiceEndpoint serviceEndpoint, ClientRuntime behavior)
{
    // Override the configured endpoint address with the URI from the behavior's configuration.
    serviceEndpoint.Address = new System.ServiceModel.EndpointAddress(this.Uri);
}

Incidentally, I borrowed this implementation with pride from the ClientVia behavior that comes with the .NET Framework. Apart from the fact that that behavior sets the ClientRuntime.Via property and this one sets the ServiceEndpoint.Address property, the implementations are very nearly identical.
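For completeness: the remaining members of IEndpointBehavior can be left empty for a client-only behavior like this one. A minimal sketch of the surrounding class (the constructor and property are my own naming, not the actual bLogical source):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class RemoteAddressBehavior : IEndpointBehavior
{
    // The URI that should end up as the endpoint address (and thus the wsa:To header).
    public Uri Uri { get; set; }

    public RemoteAddressBehavior(Uri uri)
    {
        Uri = uri;
    }

    void IEndpointBehavior.ApplyClientBehavior(ServiceEndpoint serviceEndpoint, ClientRuntime behavior)
    {
        // Override the endpoint address at runtime.
        serviceEndpoint.Address = new EndpointAddress(this.Uri);
    }

    // No-ops: nothing to do for binding parameters, dispatch or validation.
    void IEndpointBehavior.AddBindingParameters(ServiceEndpoint serviceEndpoint, BindingParameterCollection bindingParameters) { }
    void IEndpointBehavior.ApplyDispatchBehavior(ServiceEndpoint serviceEndpoint, EndpointDispatcher endpointDispatcher) { }
    void IEndpointBehavior.Validate(ServiceEndpoint serviceEndpoint) { }
}
```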

This allows you to configure BizTalk in the following manner.

The “Address (URI)” property can be set to anything (as long as it is http- or https-prefixed), since it will later be overridden.

image

In the behaviors section we now have two behaviors, clientVia:

image

and the new one I created, which I called remoteAddress:

image

clientVia is configured with the actual URI of the service we need to call, while the remoteAddress behavior is configured with the value we want to have in the To header.

The solution contains three files of interest.

image

The App.config file holds a snippet of configuration that needs to be placed in machine.config or in the BizTalk WCF-Custom WCF extension part of the Send Handler of the host that handles the WCF calls being made. It exists to make the behavior available to the BizTalk process and looks like this:

<extensions>
  <behaviorExtensions>
    <add name="remoteAddress" type="bLogical.BizTalk.WSAHelper.RemoteAddressElement, bLogical.BizTalk.WSAHelper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3672865486d21857"/>
  </behaviorExtensions>
</extensions>

It points to the RemoteAddressElement class, whose responsibility it is to point out the type of the behavior and create new instances.
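Such an element typically follows the standard WCF BehaviorExtensionElement pattern; a hedged sketch (the uri attribute name is an assumption based on the behavior’s purpose):

```csharp
using System;
using System.Configuration;
using System.ServiceModel.Configuration;

public class RemoteAddressElement : BehaviorExtensionElement
{
    [ConfigurationProperty("uri", IsRequired = true)]
    public Uri Uri
    {
        get { return (Uri)base["uri"]; }
        set { base["uri"] = value; }
    }

    // Tells WCF which behavior type this element produces.
    public override Type BehaviorType
    {
        get { return typeof(RemoteAddressBehavior); }
    }

    // Called by WCF to instantiate the behavior with the configured URI.
    protected override object CreateBehavior()
    {
        return new RemoteAddressBehavior(this.Uri);
    }
}
```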

The RemoteAddressBehavior then, in turn, performs the logic explained above.

The project and code is available here.

I suppose a custom pipeline component setting the Address, or a Dynamic Send port for easier cases of configuration might also do the trick.

BizTalk, WCF

BizTalk Send Ports, WS-Addressing, ClientVia and non-http prefixed To headers

Through WS-Addressing, services can require a certain value in the WS-Addressing <wsa:To> SOAP header. WCF has full support for this, and that support is inherited by the WCF adapters. When using WS-Addressing in a BizTalk Server send port, what you enter in the “Address (URI)” of the WCF adapter’s configuration will also be the value that ends up in the <To> header.

Like so:

image 

This will produce the following. Other headers and body removed for clarity.

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <a:To s:mustUnderstand="1">http://localhost:8990/Service1</a:To>
  </s:Header>
  <s:Body>
    ...
  </s:Body>
</s:Envelope>

If you need to have a different value in the <to> header than the actual address that the service is hosted at it becomes a little bit trickier. You need to use the WCF-Custom adapter and add the ClientVia behavior. The value configured as the “Address (URI)” will still end up as the value in the <to> header, but the actual URI that the call will be made to will be the value you configure in the ClientVia’s viaUri property.

Like so:

image image
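Expressed as configuration rather than screenshots, the WCF-Custom behavior section corresponds roughly to the following (the behavior name is illustrative):

```xml
<behaviors>
  <endpointBehaviors>
    <behavior name="ViaOverride">
      <!-- The URI the call is physically sent to; the Address (URI)
           configured on the port still ends up in the To header. -->
      <clientVia viaUri="http://localhost:8990/Service1" />
    </behavior>
  </endpointBehaviors>
</behaviors>
```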

This will produce the following (again cleaned for clarity):

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <a:To s:mustUnderstand="1">http://somedummyvalue/</a:To>
  </s:Header>
  <s:Body>
    ...
  </s:Body>
</s:Envelope>

Now, as long as the value that you want in the <to> header is http or https (depending on the bindings security settings) then you are fine. However, if you end up needing to have a value in your <to> header that looks for example like this: urn:company/path/subpath/Service1, then you’re in trouble.

You will get an error dialog saying that The specified address is invalid. Invalid address scheme; expecting “http” scheme.

image

Why? Because BizTalk Server, in its diligence to help you configure things correctly, will force you to enter a URI that is prefixed with either http or https (again, depending on the security setting of the binding). There is no way to configure a non-http-prefixed address in the adapter GUI (that I know of).

A colleague of mine, Gustav Granlund, experienced this issue and found a solution: you can sneak it in through a binding file, and the runtime will accept it. Doing this you can enter an “Address (URI)” like so:

image

And you are then able to send a message that looks like this (to an address of http://localhost:8990/Service1):

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <a:To s:mustUnderstand="1">urn:company/path/subpath/Service1</a:To>
  </s:Header>
  <s:Body>
    ...
  </s:Body>
</s:Envelope>

The caveat is that after having done this you cannot open and change WCF adapter settings in the port through the administration console GUI and keep the urn:company/path/subpath/service1 style URI. But, as mentioned, BizTalk will happily run with it. In a follow up post I examine another option.

HTH,

/Johan

Uncategorized

Things to consider with WCF-SQL

Suppose you have a SQL Server database that contains information you need to poll from BizTalk Server. The database contains events that you need to process one by one, with the oldest events first, but without there being any real in-order delivery demands. One way to start polling these would be to have a stored procedure that looks like this:

declare @Id int

select top(1) @Id=Id from MyTable Where MyCondition = 23

update MyTable set Processed = 1 where Id = @Id

select Id, MyData from MyTable where Id = @Id
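Note also that the read and the update are two separate statements, so two concurrent polls can claim the same row. A hedged sketch of a more atomic variant (same made-up table) that claims the oldest row in a single statement:

```sql
-- Claim the oldest unprocessed row and return it atomically.
-- UPDLOCK keeps the row locked from selection through update;
-- READPAST lets concurrent pollers skip rows already claimed.
;with NextEvent as (
    select top(1) Id, MyData, Processed
    from   MyTable with (rowlock, updlock, readpast)
    where  Processed = 0 and MyCondition = 23
    order  by Id
)
update NextEvent
set    Processed = 1
output inserted.Id, inserted.MyData;
```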

The first thing to take note of is WCF-SQL Transaction Level and SQL Server locking.

Let’s do that before we even bring BizTalk into the picture.
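One hedged way to do that: run the polling statements inside an open transaction in one session and inspect the locks from another (the procedure name and SPID are placeholders):

```sql
-- Session 1: run the poll at the adapter's default isolation level
-- and leave the transaction open so the locks stay visible.
set transaction isolation level serializable;
begin tran;
exec dbo.PollEvents;   -- hypothetical wrapper around the statements above

-- Session 2: inspect what session 1 is holding.
select resource_type, request_mode, request_status
from   sys.dm_tran_locks
where  request_session_id = 55;  -- replace with session 1's SPID

-- Session 1, when done:
-- rollback;
```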

What we can see from this is that while this procedure triggers…

One of the ways to reduce locking is to reduce the transaction isolation level. This is especially important if this database is not used by BizTalk alone, but is a very active OLTP database in itself – a backend data store to a multi-user application. The WCF-SQL adapter defaults to the Serializable isolation level. You can read more about isolation levels here. In short this means that a shared/read lock will be placed on all data read, and an exclusive/write lock on all data changed. Another process can read data that has a shared lock on it, but cannot read data that has an exclusive lock on it. And another process cannot update any data that has a lock on it.
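If lowering the isolation through adapter configuration is not an option, one hedged alternative is to lower it inside the polling statement itself, and let pollers skip locked rows:

```sql
-- Poll at read committed instead of the default serializable;
-- READPAST additionally skips rows another transaction has locked.
set transaction isolation level read committed;

select top(1) Id, MyData
from   MyTable with (readpast)
where  Processed = 0
order  by Id;
```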