Tuesday 21 December 2010

Moving Windows 7 from one partition to another

When building the laptop I use, I made a not uncommon mistake... I had Vista installed and, as I needed the machine, rather than "risk" an in-place upgrade I installed Windows 7 on an unused extended partition so I could dual boot... Now that I am happy with 7 and running low on disk space, I realised I couldn't simply remove the Vista partition and merge, because it was the primary partition...

Well, that's not so hard. I grabbed a trusted copy of UBCD (Ultimate Boot CD) and, using one of the free partition tools, decided to copy my Windows 7 partition over the top of Vista and remove the Vista partition; for some reason that sounded like the simplest solution.

I had the Windows 7 ISO burned to a disc (for the recovery/repair wizard etc.) and expected boot problems, which were duly fixed.

What I had not expected was that Windows would now boot OK and let me log in OK, but then sit for an age "preparing desktop" and then nothing, nada...

I tried running Task Manager and couldn't get any exe to run, except "cmd", which is when I realised Windows 7 now thought it was installed on drive "D:".

This is not such an uncommon problem, and it didn't take long to go through the various solutions, checking diskpart and the boot configuration via the repair command prompt when booted from the setup disc.

I eventually figured out that I needed to clear HKEY_LOCAL_MACHINE\SYSTEM\MountedDevices, but here's the mistake I made (and it was really frustrating): as I couldn't get regedit to run inside Windows, I tried (more than once) to clear this key from regedit while booted from the repair disc.

This is a mistake, and the penny didn't drop until I found this linked from a 7-forums post.

The point is that regedit edits the registry of the Windows you booted into; you can only make changes to the other installation's registry if you explicitly load its hive from disk.
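As a rough sketch of what that looks like from the repair-disc command prompt (the temporary key name and the drive letter are just illustrative):

rem Load the SYSTEM hive of the Windows 7 installation (assumed here to be on D:)
reg load HKLM\OfflineSystem D:\Windows\System32\config\SYSTEM

rem Delete all values under MountedDevices so the drive letter mappings are rebuilt on next boot
reg delete HKLM\OfflineSystem\MountedDevices /va /f

rem Unload the hive so the change is written back to the offline installation
reg unload HKLM\OfflineSystem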

Friday 16 July 2010

Patching files in the GAC, IIS, and the windows\assembly\temp folder

We keep hitting the same odd problem on our dev server (a VM) when updating (patching) existing dlls that are resident in the GAC…

The issue is that despite deleting the old file, copying in the new file (a typical patch) and then restarting the host, the change cannot be seen, or worse, you get a “missing method” exception etc.

The cause of this appears to be associated with the way Fusion works when an update is applied to a dll which is held by a long-running process, say BizTalk or IIS…

Fusion appears to allow the old version of the GAC file to be removed (either because you’ve asked to remove it as a “step 1”, or because you simply drop the new version in “over the top”), but to do so it must deal with the existing file. If it finds that something is still using the old version (a lock exists), it moves the old version to a “temp” folder before moving the new version into the correct folder… It will try to clean up this temp folder later.


See here for more info



However, this shouldn’t (in itself) cause us any problems, though it’s a good thing to be aware of… it’s supposed to work that way, and that’s fine.

Specifically, our issues (which in the past we solved by rebooting the server) appear to be with the host process that is using the file you’re trying to update; today on our dev server that has been IIS.

When you stop IIS it appears to leave some instances of w3wp (the worker process that hosts the service/web site) running; using procexp (Process Explorer from Sysinternals) you can see these instances still have file handles pointing at the old version of the dll(s) in question, even though you can also see the new version(s) loaded.

Simply put, the trick on our server has been to run “iisreset /stop” twice… the second time around the w3wp processes die and the handles are released. When IIS is started and a test is made, the references to the old dll disappear and your code works.
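For what it’s worth, the sequence looks roughly like this (tasklist is just there to confirm whether any w3wp workers are still hanging around):

iisreset /stop
tasklist /fi "imagename eq w3wp.exe"
iisreset /stop
tasklist /fi "imagename eq w3wp.exe"
iisreset /start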

I am guessing this could apply (and probably has for us previously) to BizTalk hosts too. You can then clear the temp files yourself, or let Fusion clear them out automatically later.

So in summary, you can “patch” files in the GAC, but unless all the old processes are unloaded (as you might expect) your old file will still be used, and worse, a “hard” copy of it will exist somewhere on disk!

Temp folders:
c:\windows\assembly\tmp\
c:\windows\assembly\temp\
C:\WINDOWS\microsoft.net\Framework\v2.0.50727\Temporary ASP.NET Files\

** Note: The GAC files in question are strong named, but the new versions are differentiated by FILE VERSION ONLY, not assembly version. So the full name / qualified name of the dll is the same! **
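A quick way to see the two versions from code, if you want to double-check what you’re actually loading (the assembly name and token here are illustrative; requires System.Reflection and System.Diagnostics):

// assembly version comes from the assembly identity, file version from the Win32 version resource
Assembly asm = Assembly.Load("MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=123456abc123a123");
Console.WriteLine(asm.GetName().Version);                                      // assembly version (unchanged by the patch)
Console.WriteLine(FileVersionInfo.GetVersionInfo(asm.Location).FileVersion);   // file version (changed by the patch)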

Tuesday 1 June 2010

TFS Branching

A good resource here

and in the "ranger" TFS branching guide here

SAK reference in VS projects

And here's why

Visual Studio project GUIDS

Some care needs to be taken when a large number of people are contributing to a solution... When a VS project is created, a new (unique, of course) GUID is assigned to it; when the project is part of a solution, VS uses that GUID to reference the project.

Here comes the trouble... VS may see an inconsistency, i.e. it spots a project GUID within a solution which it cannot resolve:
1. Because another project exists with the same GUID (in which case VS will generate a new GUID for one of these projects)
2. Because another user has the same project or the same solution, but either the project or the project reference in the solution has a different GUID from the one you have, and they checked in! In this case (with automatic check-out configured) VS will "correct" the mismatch, and the user may not have been aware of it (or not bothered to check) when they checked in their changes.

Either way, great care is required; we've seen this when people create a new project by copying and pasting an existing project.

If it's not noticed, or not acted upon, it can cause a nasty circle of check-in, check-out, change, check-in amongst a large group or between teams, and a great deal of confusion for developers not familiar with the details.

The way to fix this is to make sure all projects within a solution have a unique GUID, and that each project referenced in the solution has the correct GUID (note: because the project file can be changed by VS, you have to look carefully at the history to find the correct GUID, which other people may still be referencing).

This situation applies to scenarios where lots of "project references" exist in the solution and where the solution is stored in source control.
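For reference, the GUID lives in two places and they must agree; the project name and project GUID below are made up for illustration (the first GUID in the .sln line is just the standard C# project-type GUID):

<!-- MyProject.csproj -->
<PropertyGroup>
  <ProjectGuid>{11111111-2222-3333-4444-555555555555}</ProjectGuid>
</PropertyGroup>

# MySolution.sln
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProject", "MyProject\MyProject.csproj", "{11111111-2222-3333-4444-555555555555}"
EndProject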

Thursday 22 April 2010

MS Build, deployment of websites

I had a bit of trouble with this, finally coming across this blog: blog.m.jedynak.pl

Two things to watch out for with _CopyWebApplication

1. You need to resolve the references by making an explicit call to "ResolveReferences"
2. You need to specify both the output path (OutDir) *and* the web project output path (WebProjectOutputDir); if you don't (as noted in the linked blog post), the references will only get copied one level deep!

However, once it's working, it is well worth the effort and of course invaluable for continuous integration!

<MSBuild
    Projects="%(WebProject.path)"
    Properties="OutDir=$(DestFolder)\%(WebProject.Identity)\BIN\;WebProjectOutputDir=$(DestFolder)\%(WebProject.Identity)\"
    Targets="ResolveReferences;_CopyWebApplication"
/>
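For completeness, the WebProject item used above has to be declared somewhere in the script; something along these lines (the identity and path are placeholders):

<ItemGroup>
  <WebProject Include="MyWebSite">
    <path>..\Web\MyWebSite\MyWebSite.csproj</path>
  </WebProject>
</ItemGroup>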




Update: We have had some issues with this approach; the principal one is what appears to be a bug within the ResolveReferences target.

We always build a solution containing the web services to deploy and we do this first.

However (and I am not sure exactly what triggers this), the CoreCompile target can be called if the framework thinks an assembly is "out of date" and needs recompiling. As I say, this shouldn't happen for us given that we build the solution up front, but sometimes it does.

When this does happen (on a project-by-project basis), and where the project has either a direct or indirect reference to another project AND that other project has a binary/file reference to a Microsoft assembly (in this example System.Web.Services), then we hit some trouble.

CoreCompile calls the compiler directly, passing in a list of arguments including all the dependencies, but in this case System.Web.Services is a second-level dependency (it's not directly referenced by the assembly being built, but is by a child assembly) and it doesn't get added, causing an exception.

I am still not sure if there's a fix for this, but the problem was in the context of some old VS2005 projects and I have a feeling it wouldn't be an issue with 2008 or 2010.

Anyway, for now we are using a Folder.Copy target (given that we manage GAC dependencies and we pre-build the solution) and this is fine for us, for now.
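The folder copy itself is nothing special; here's a minimal sketch, assuming the solution has already been built and that the paths and $(DestFolder) are defined elsewhere:

<ItemGroup>
  <WebOutputFiles Include="..\Web\MyWebSite\bin\**\*.*" />
</ItemGroup>

<Target Name="CopyWebOutput">
  <Copy SourceFiles="@(WebOutputFiles)"
        DestinationFolder="$(DestFolder)\MyWebSite\bin\%(RecursiveDir)" />
</Target>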

Thursday 8 April 2010

Testing biztalk maps where the xsl calls out to deployed components

It's quite common to write XSL which obtains values from, or makes use of, existing .NET library functions.

Example

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:bjg="http://ns.com/myext"
    exclude-result-prefixes="bjg">

    <xsl:template match="/">

        <SomeOutput>
            <xsl:value-of select="bjg:SomeMethod()"/>
        </SomeOutput>
        
    </xsl:template>
</xsl:stylesheet>

Where the extension(s) are defined in a "mapper extension" XML file (pointed to from the .btm map file):

<ExtensionObjects>

    <ExtensionObject
       Namespace="http://ns.com/myext"
       AssemblyName="MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=123456abc123a123"
       ClassName="MyAssembly.MyClass" />

</ExtensionObjects>

However, it's less straightforward to test the XSL outside of BizTalk.

The XslCompiledTransform requires information about these assemblies, along with the test XML instance.

The original map extensions file can be used to do this with the following code, which loads an XsltArgumentList to be applied to the XslCompiledTransform (possibly run from a test):

// requires System.Xml, System.Xml.Xsl and System.Reflection
XsltArgumentList xslArgs = new XsltArgumentList();

XmlDocument xmld = new XmlDocument();
xmld.Load(mapperExtensionsFilePath);

XmlNodeList ns = xmld.SelectNodes("/ExtensionObjects/ExtensionObject");

// load each extension
foreach (XmlNode n in ns)
{
    // get attributes
    string assemblyName = n.SelectSingleNode("@AssemblyName").InnerText;
    string theNamespace = n.SelectSingleNode("@Namespace").InnerText;
    string className = n.SelectSingleNode("@ClassName").InnerText;

    // find the extension type and register an instance of it against its namespace
    foreach (Type t in Assembly.Load(assemblyName).GetTypes())
    {
        if (t.FullName.Equals(className))
        {
            xslArgs.AddExtensionObject(theNamespace, Activator.CreateInstance(t));
            break;
        }
    }
} // get next extension
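With the argument list populated, running the transform against a test instance is then straightforward; something like this (the paths are placeholders):

// run the map's XSL directly, outside of BizTalk
XslCompiledTransform xslt = new XslCompiledTransform();
xslt.Load(xslPath);

using (XmlWriter writer = XmlWriter.Create(outputXmlPath))
{
    xslt.Transform(testInstanceXmlPath, xslArgs, writer);
}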

Thursday 4 March 2010

Biztalk gotcha?

BTS 2006

Exception type: TypeInitializationException
Source: MyNS.MyService

Additional error information: Field not found: 'ReferencedAssembly_.Type.Method'.

Exception type: MissingFieldException
at MyNS.MyService..cctor()

****

Biztalk Assembly 1: Assm1.dll
Biztalk Assembly 2: Assm2.dll

Assm1 references Assm2

Assm2 contains a "web reference" to a service, but in this scenario Assm1 references Assm2 to access a shared type; it doesn't care about the web reference.

Now, say I update the web reference and build Assm1, which will in turn cause Assm2 to be built...

I need to ensure *both* assemblies get deployed...

Because the web service "proxy" will have been propagated into Assm1.

This isn't a problem until the web service contract is changed - a breaking change, say a method is added.

If this happens and only Assm1 is deployed, then I'll get a missing field exception in the Assm1 constructor as the proxy definition there will be expecting a field in the proxy which doesn't yet exist on the target machine!
In order to deal with this, I'd need to deploy both assemblies.

A better idea would be to have an assembly wrapping the web reference and nothing else, so I'm only referencing this assembly if I actually need the web proxy!

Snippet of constructor in Assm1

static MyType()
{
__access = 1;
__execable = false;
_serviceId = HashHelper.HashServiceType(typeof(MyType));
_lockIdentity = new object();
_portInfo = new PortInfo[] { new PortInfo(new OperationInfo[] {
Assm2WebProxy.Method1, Assm2WebProxy.Method2, Assm2WebProxy.Method3 ....

Sunday 7 February 2010

If a class implements IDisposable then an instance of that class should be wrapped in a “using” (C#, or the VB equivalent), which guarantees Dispose is called… otherwise Dispose should always be called explicitly; if Dispose is implemented, the developer meant that specific cleanup logic should always execute. Usually Dispose is implemented for cleaning up unmanaged resources, rather than managed ones.

Where Close* is also implemented, Dispose should always call Close as part of the dispose implementation, and because we’re all cynics, Reflector is our friend; see the reflected code snippets for SqlDataReader and SqlConnection below.

Obviously, with connection pooling in the picture, closing a connection doesn’t necessarily mean that the underlying resource is freed.
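As a simple illustration of the point (the connection string and query are placeholders):

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("SELECT 1", connection))
{
    connection.Open();

    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the row...
        }
        reader.Close();    // explicit, even though Dispose would close it anyway
    }

    connection.Close();    // likewise; the connection goes back to the pool
}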

Framework2.0 SqlConnection

public void Dispose()
{
    this.Dispose(true);
    GC.SuppressFinalize(this);
}

protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        this._userConnectionOptions = null;
        this._poolGroup = null;
        this.Close();
    }
    this.DisposeMe(disposing);
    base.Dispose(disposing);
}


Framework2.0, SqlDataReader

public void Dispose()
{
    this.Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
    if (disposing)
    {
        this.Close();
    }
}


BTW, whilst I’m in favour of understanding the inner workings of the objects that you code against, it’s my opinion that better code is explicit code, so I would be in favour of an explicit Close() in all cases where Close() should be called, even if I know, as a developer, that Close() is implicit.

I believe this also guards against future changes to the framework, in later versions.



* Close would usually be implemented because it’s (conceptually) cheaper to re-open a closed connection (for example) than to allocate a new one, which also appears to be the consensus.

Friday 22 January 2010

Testing for default T - Generics

I came across this SO post after discovering you cannot just write:


if ( value == default(T) ) {}


You actually need either
if ( object.Equals(value, default(T)) ) {}


or (more efficient according to the SO post, though I've not checked this)
if (EqualityComparer<T>.Default.Equals(value, default(T))) {}
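Wrapped up as a small helper (just a sketch) it looks like this:

// requires System.Collections.Generic
public static bool IsDefault<T>(T value)
{
    return EqualityComparer<T>.Default.Equals(value, default(T));
}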



UPDATE:

I was asked the question: “I assume that this is to avoid evaluating the reference (pointer) rather than the value?”

I think that would depend on the implementation of the compare method for the type being used and as the SO post suggests this isn’t the whole story… to take a step back…

The reason for this roundabout way of checking that the value (of type T) is empty is that T could be either a value type or a reference type, so a method is needed which can deal with both.

As you suggest, Object.Equals() compares references, so if T is a value type it first needs boxing (it first needs to be turned into a reference type); you can see this in the IL (example 1 below, a contrived and nonsensical function!)…

Also note in the IL that “default(T)” statement means that T is boxed too – this will be because T can be either reference or value type, so the IL boxes to force the issue (actually this surprises me a bit, I was thinking the compiler would be able to work this out based on the use, but maybe that’s asking a lot?!).

So, to go back to the original post (and a check in MSDN confirms this), the use of “EqualityComparer<T>.Default.Equals(value, default(T))”, because the underlying implementation supports both value and reference types [class/struct], means that the generated IL no longer needs to box first (example 2). I believe that’s why it’s considered more efficient, and that’s the gist of the comment on SO.

Note: I haven’t dug down any deeper, it could well be that if the code falls back to Object.Equals for a value type then boxing will still occur.


Examples:

        public static T Foo<T>()
        {
            T r = default(T);
            int x = 0;
            string y = "";
            bool result = object.Equals(x, default(T));
            return r;
        }

(abridged)
IL_0012: box [mscorlib]System.Int32
IL_0017: ldloca.s CS$0$0001
IL_0019: initobj !!T
IL_001f: ldloc.s CS$0$0001
IL_0021: box !!T
IL_0026: call bool [mscorlib]System.Object::Equals(object, object)

        public static T Foo<T>()
        {
            T r = default(T);
            int x = 0;
            string y = "";
            bool b = EqualityComparer<T>.Default.Equals(r, default(T));
            return r;
        }


(abridged)
IL_0001: ldloca.s r
IL_0003: initobj !!T
.
.
IL_0019: initobj !!T
IL_001f: ldloc.s CS$0$0001
IL_0021: callvirt instance bool class [mscorlib]System.Collections.Generic.EqualityComparer`1::Equals(!0, !0)

Thursday 21 January 2010

WCF Basic HTTP, XML Serializer, XmlRoot and namespaces

Example service takes a simple complex type and appends additional data to the string value:

public class TestService : ITestService
{
    public MyType MyTestMethod(MyType obj)
    {
        obj.StringValue += "Suffix";
        return obj;
    }
}

…And the interface definition…
[ServiceContract(Namespace = "http://servicecontract"), XmlSerializerFormat()]
public interface ITestService
{
    [OperationContract]
    MyType MyTestMethod(MyType obj);
}

[Serializable, XmlType, XmlRoot(Namespace = "http://datacontract")]
public class MyType
{
    [XmlAttribute]
    public string StringValue { get; set; }
}


Notice that the type (defined in the data contract) which is passed through MyTestMethod() has a different namespace from the service.

Also note that the service is decorated with xml serialization attributes and marked explicitly to use XML Serializer rather than the default Data Contract Serializer (DCS).

When this service is consumed within a .NET client application the proxy is auto-generated (svcutil.exe), but note that the proxy definition and the original definition do not match exactly; there is a subtle difference:

(Abridged definition)
[System.SerializableAttribute()]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://datacontract")]
public partial class MyType : object, System.ComponentModel.INotifyPropertyChanged {

    private string stringValueField;

    [System.Xml.Serialization.XmlAttributeAttribute()]
    public string StringValue {
        get {
            return this.stringValueField;
        }
        set {
            this.stringValueField = value;
        }
    }
}


The missing attribute is:

[System.Xml.Serialization.XmlRootAttribute(Namespace="http://datacontract")]

This does not stop the proxy from working; you can still send and receive “MyType” through the service, and it will be deserialized correctly at both the service and the client:

static void Main(string[] args)
{
    using (TestServiceClient service = new TestServiceClient())
    {
        MyType t = new MyType();
        t.StringValue = "Prefix:";
        MyType r = service.MyTestMethod(t);
    }
}


But, if you serialize the type you will notice that the namespace is missing from the request to the service

<MyType StringValue="Prefix:" />

And from the response, from the service

<MyType StringValue="Prefix:Suffix" />


A close look at the wsdl helps to explain why
<xs:schema elementFormDefault="qualified" targetNamespace="http://servicecontract" xmlns:tns="http://servicecontract">
    <xs:import schemaLocation="http://.../?xsd=xsd1" namespace="http://datacontract"/>
    <xs:element name="MyTestMethod">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="obj" type="q1:MyType" xmlns:q1="http://datacontract"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
    <xs:element name="MyTestMethodResponse">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="MyTestMethodResult" type="q2:MyType" xmlns:q2="http://datacontract"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>


Both the request “obj” and response “MyTestMethodResult” elements belong to the service namespace though the type is defined correctly in the namespace I defined for the data contract.

We understand this behaviour is part of WCF by design; it’s not a bug and should not cause too many problems provided it’s understood.

***

Out of curiosity I tried adding this simple service to BTS 2006 using “add web reference” from the project context menu, but received this error:

ERROR: Failed to add a Web Reference.

I then read this post from Saravana Kumar (with comment from our Jeremy).

I removed the service namespace, rebuilt the service and re-added it as a “web reference” successfully. I then re-added the service namespace, rebuilt, and tried updating the web reference in my BTS 2006 project (just to see), and got a different error, again noted in Saravana’s post:

Could Not generate BizTalk files. Index was out of range. Must be non-negative and less than the size of the colleciton. Parameter name: index

I then altered my service definition as per Jeremy’s comments, adding a service behaviour attribute, also with the service namespace:
[ServiceBehavior(Namespace = "http://servicecontract")]
public class TestService : ITestService
{
    public MyType MyTestMethod(MyType obj)
    {
        obj.StringValue += "Suffix";
        return obj;
    }
}


The service interface stays the same; the service contract attribute keeps its namespace declaration.
Important note: the namespace on the ServiceBehavior and the namespace on the ServiceContract are the same:

[ServiceContract(Namespace = "http://servicecontract"), XmlSerializerFormat()]
public interface ITestService
{
    [OperationContract]
    MyType MyTestMethod(MyType obj);
}

[Serializable, XmlType, XmlRoot(Namespace = "http://datacontract")]
public class MyType
{
    [XmlAttribute]
    public string StringValue { get; set; }
}



This worked OK and I was able to add my web reference to my BTS 2006 project, with the service displaying the correct service namespace.

However, the WSDL differs significantly with this change…

Here’s the WSDL before the change (BizTalk 2006 threw an error when we tried to import from this), note the target namespace of the wsdl is “tempuri”, though I had thought a service namespace had been provided in the service contract (in the service interface definition):

<?xml version="1.0" encoding="utf-8"?>
<!-- without behaviour, BizTalk unhappy!-->
<wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:i0="http://servicecontract" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:wsap="http://schemas.xmlsoap.org/ws/2004/08/addressing/policy" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:msc="http://schemas.microsoft.com/ws/2005/12/wsdl/contract" xmlns:tns="http://tempuri.org/" xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:wsa10="http://www.w3.org/2005/08/addressing" xmlns:wsx="http://schemas.xmlsoap.org/ws/2004/09/mex" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" name="TestService" targetNamespace="http://tempuri.org/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
    <wsdl:import namespace="http://servicecontract" location="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary1/Service1/?wsdl=wsdl0" />
    <wsdl:types />
    <wsdl:binding name="BasicHttpBinding_ITestService" type="i0:ITestService">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http" />
        <wsdl:operation name="MyTestMethod">
            <soap:operation soapAction="http://servicecontract/ITestService/MyTestMethod" style="document" />
            <wsdl:input>
                <soap:body use="literal" />
            </wsdl:input>
            <wsdl:output>
                <soap:body use="literal" />
            </wsdl:output>
        </wsdl:operation>
    </wsdl:binding>
    <wsdl:service name="TestService">
        <wsdl:port name="BasicHttpBinding_ITestService" binding="tns:BasicHttpBinding_ITestService">
            <soap:address location="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary1/Service1/" />
        </wsdl:port>
    </wsdl:service>
</wsdl:definitions>


…And here’s the WSDL with the “service behaviour” attribute and namespace added; note that the types are now defined and no import of a separately defined WSDL is required.

<?xml version="1.0" encoding="utf-8"?>
<!-- with behaviour, BizTalk is happy!-->
<wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:i0="http://tempuri.org/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:wsap="http://schemas.xmlsoap.org/ws/2004/08/addressing/policy" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:msc="http://schemas.microsoft.com/ws/2005/12/wsdl/contract" xmlns:tns="http://servicecontract" xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:wsa10="http://www.w3.org/2005/08/addressing" xmlns:wsx="http://schemas.xmlsoap.org/ws/2004/09/mex" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" name="TestService" targetNamespace="http://servicecontract" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
    <wsdl:import namespace="http://tempuri.org/" location="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary1/Service1/?wsdl=wsdl0" />
    <wsdl:types>
        <xsd:schema targetNamespace="http://servicecontract/Imports">
            <xsd:import schemaLocation="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary1/Service1/?xsd=xsd0" namespace="http://servicecontract" />
            <xsd:import schemaLocation="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary1/Service1/?xsd=xsd1" namespace="http://datacontract" />
        </xsd:schema>
    </wsdl:types>
    <wsdl:message name="ITestService_MyTestMethod_InputMessage">
        <wsdl:part name="parameters" element="tns:MyTestMethod" />
    </wsdl:message>
    <wsdl:message name="ITestService_MyTestMethod_OutputMessage">
        <wsdl:part name="parameters" element="tns:MyTestMethodResponse" />
    </wsdl:message>
    <wsdl:portType name="ITestService">
        <wsdl:operation name="MyTestMethod">
            <wsdl:input wsaw:Action="http://servicecontract/ITestService/MyTestMethod" message="tns:ITestService_MyTestMethod_InputMessage" />
            <wsdl:output wsaw:Action="http://servicecontract/ITestService/MyTestMethodResponse" message="tns:ITestService_MyTestMethod_OutputMessage" />
        </wsdl:operation>
    </wsdl:portType>
    <wsdl:service name="TestService">
        <wsdl:port name="BasicHttpBinding_ITestService" binding="i0:BasicHttpBinding_ITestService">
            <soap:address location="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary1/Service1/" />
        </wsdl:port>
    </wsdl:service>
</wsdl:definitions>

The difference is this: in the first WSDL the types tag is empty and the types are defined in the imported wsdl0; this appears to be because the service itself has two namespaces, the service namespace (as defined in the service contract) and tempuri.

When the service behaviour attribute is added to the service implementation (and it shares the same namespace as defined on the contract), the whole service now exists under a single namespace and therefore the WSDL is complete without any further imports.

Clearly, in this example the type generator in BTS 2006 sees the empty types definition and doesn’t know to resolve the imported WSDL (wsdl0), which would yield the type definitions.

The SOAP adapter would appear to deal with the XmlRoot issue OK; I believe this is because from BizTalk you pass an instance of the type, and the SOAP adapter builds the SOAP envelope, so you do not have to construct the message elements directly (the request “obj” and response “MyTestMethodResult” elements; see the WSDL extract above).

Here is the xml for the message I sent to the service from Biztalk 2006

<MyType StringValue="somevalue" xmlns="http://datacontract" />

And here is the xml from the message once it’s passed through the service from BTS:-

<MyType StringValue="somevalueSuffix" xmlns="http://datacontract" />


Here is the schema that describes the type (MyType); this is the “Reference.xsd” file which the web message types reference in BTS. Note that somehow(?) BTS knows to create the type “properly”:

<?xml version="1.0"?>
<xs:schema xmlns:tns="http://datacontract" elementFormDefault="qualified" targetNamespace="http://datacontract" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="MyType" nillable="true" type="tns:MyType" />
  <xs:complexType name="MyType">
    <xs:attribute name="StringValue" type="xs:string" />
  </xs:complexType>
</xs:schema>


The "somehow" is the bit that maybe we should take more time to understand, but there is always a balance in how far to analyze!

Related: Yossi post on SO