Showing posts with label Development. Show all posts

Thursday, October 08, 2009

I am here to help... really

“Why are these guys messing with my design? What makes them think they know better, especially since they don’t actually produce anything?!” As a developer, those were often my first thoughts when I had to deal with an “Architect”. Now I hear them directed back at me. Am I really a Pig? I certainly hope not.

I am referring to an excellent post, “Chickens, Chicks, Pigs and Piglets”. My latest interaction with the business development team started out very much like that. It pointed to the problem with many non-business-aligned technology teams (such as architecture): we need to prove to others that we actually help their teams.

It doesn’t matter how good you are, how good your designs are, or how new your technology is. The only thing that counts is: “Does it help the other team deliver their product and run it after delivery?” In the end, we are there to support the other business technology teams. That is an extremely important point to remember and to follow through on.

So back to my latest project. I think it’s back on track after extra effort on both sides of the table. The most important thing to remember: “Commitment to the project”. It is not the technology or the ideology that should be the center of attention; it should be the project. As an architect, only after you show others that you are there to help their projects, not to derail them by forcing them to chase “The Vision”, can you successfully influence and deliver that vision.



Monday, August 31, 2009

Source Control Management 201 - Repository Design for efficient code management using any source control

Source control management is an essential part of development. It should be just as critical a part of the developer’s toolbox as a good text editor. No project is small enough not to deserve it, and there is no such thing as a bad source control management system. While there are probably hundreds of SCM systems out there, some are more developer-friendly than others. Some of the more common ones you hear about today are Subversion, CVS, Git, Visual SourceSafe, and Rational ClearCase.

The goal of a source control system is to answer the following questions:

  • What code is currently running in production?
  • What is the latest code a developer should be looking at?
  • What code was used to compile version X.XX.XXX?
  • How can a developer make changes without affecting other developers?
  • What changed between version X.XX.XXX and X.XX.XXY?
  • How can a developer incorporate changes between X.XX.XXX and X.XX.XXY to make X.XX.XXZ?
  • How can a developer rollback changes from X.XX.XXY to get back to X.XX.XXX?
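Most of these questions map directly onto basic SCM commands. Here is a hedged sketch of the “what changed between versions” question, using git syntax purely for illustration — the repository path, tag names, and file are all invented for the example:

```shell
# Throwaway repository with two tagged versions (all names invented).
REPO=/tmp/scm-demo
rm -rf "$REPO" && mkdir -p "$REPO" && cd "$REPO"
git init -q
git config user.email dev@example.com
git config user.name Dev

echo "version 1" > app.txt
git add app.txt && git commit -qm "release 1.00.000"
git tag v1.00.000                      # what code compiled 1.00.000?

echo "version 2" > app.txt
git commit -qam "release 1.00.001"
git tag v1.00.001

# What changed between version 1.00.000 and 1.00.001?
git diff --stat v1.00.000 v1.00.001
```

Subversion, CVS, and ClearCase answer the same question with their own tag or label mechanisms.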

All source control systems that I’ve worked with can answer all of the above questions. It does take some organizational skill on the developer’s part to allow them to do so. An SCM system stores code in a repository, and it is up to the developers and the SCM team to organize the repository’s layout. One of the more common strategies is trunk-focused development.

[Diagram: trunk-focused repository layout]

All development is done on the trunk. When the code is stable enough to be ready for QA/production testing, it is branched into a Beta/Release Candidate branch, and the trunk version is incremented. The RC branch should have only minor bug-fix-related changes applied to it; all major changes are done in the trunk. As bugs are fixed in the RC branch, the changes are migrated into the trunk. When a version is released, the RC branch is moved into a Release branch. CHANGES ARE NEVER MADE IN THE RELEASE BRANCH. IT IS READ ONLY! If a bug fix must be made to an already released version, a branch is created from the release, and the change is done on that branch. The fix is then merged into the trunk.

This layout makes it easy to answer the developer’s questions:

  • What is the latest code I should develop from: Trunk
  • What code is currently running in production: Latest Read Only branch
  • How to make changes to version X.XX.XXX: Make a new branch from the version X.XX.XXX. After finishing your changes, merge them into the trunk.
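The trunk-focused flow described above can be sketched with a few commands. This is an illustrative sketch only — the git syntax and all branch and tag names are my own, not part of the original post:

```shell
REPO=/tmp/trunk-demo
rm -rf "$REPO" && mkdir -p "$REPO" && cd "$REPO"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -qb trunk

echo "feature work" > app.txt
git add app.txt && git commit -qm "all development happens on trunk"

# Stable enough for QA: cut a release-candidate branch; trunk's version is bumped.
git branch rc-1.00

# Minor bug fixes land on the RC branch, then migrate back into the trunk.
git checkout -q rc-1.00
echo "bug fix" >> app.txt
git commit -qam "minor fix on release candidate"
git checkout -q trunk
git merge -q --no-edit rc-1.00

# On release, the RC state is frozen as a read-only release marker.
git tag release-1.00 rc-1.00
```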

Another strategy is to organize the repository around production version.

[Diagram: production-version repository layout]

The main trunk holds the currently released production version. The trunk IS READ ONLY! For development, a branch is created, and development is done on the branch. After a version is released to production, the trunk is replaced with a copy of the released branch and is again made read-only. To develop the next version, another branch is created. If a fix has to be made to the production release, a branch for the fix is created. Once the fix is released, a new trunk is created from the fix branch, and the changes are merged into the branch under current development.

This layout answers the same questions slightly differently:

  • What code is currently running in production: Trunk
  • What is the latest code I should develop from: Latest branch
  • How to make changes to version X.XX.XXX: Make a new branch from the version X.XX.XXX. After bug fix is released into production, the branch is copied and becomes the new trunk. Changes are merged into the latest development branch.
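A sketch of this production-version layout with the same illustrative git commands (again, branch names are invented for the example):

```shell
REPO=/tmp/prod-demo
rm -rf "$REPO" && mkdir -p "$REPO" && cd "$REPO"
git init -q
git config user.email dev@example.com
git config user.name Dev

# The trunk holds exactly what runs in production; it is treated as read-only.
git checkout -qb prod
echo "release 1.00" > app.txt
git add app.txt && git commit -qm "production 1.00"

# All development happens on a branch cut from the trunk.
git checkout -qb dev-1.01
echo "next version work" >> app.txt
git commit -qam "development for 1.01"

# On release, the development branch's state becomes the new read-only trunk.
git checkout -q prod
git merge -q --ff-only dev-1.01
```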

There are other repository layouts available, and your team might adapt the structures above to its own needs.



Sunday, August 09, 2009

Multithreading will only take you so far

Working with Oracle Coherence, I do a lot of thinking about distributed architecture, parallel processing, and multithreading. Making use of all this technology is a great way to solve many problems. It can often seem that as long as you can split your problem into small enough pieces, you’ll be able to process data instantaneously.

When thinking about distributed systems, we often forget that even after distributing the workload, we still have a limited number of resources. The application is deployed on a specific number of machines, each with a specific number of CPU cores. Each CPU core can process one instruction at a time (not completely true, but for simplicity’s sake). For example: in a distributed system with four CPU cores, where each workload takes 1 second to process and a total of 10 workloads need to be processed, it will take at least 3 seconds to process all of them.

[Diagram: workloads scheduled across CPU cores]

Adding more threads will not help; only adding more CPU cores will. That is critically important when you consider that for many complex operations, the result of an individual workload is not enough to provide a meaningful result to the end user: results from all requested workloads have to be aggregated to create a final result. This makes it relatively simple to figure out how long a calculation will take:

Total Time = ceil(Number of Workloads / Number of Cores) × Time Taken per Workload
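Worked through in shell arithmetic (the four-core count is my assumption, chosen to match the three-second example above):

```shell
WORKLOADS=10
CORES=4                 # assumed core count for the example
TIME_PER_WORKLOAD=1     # seconds

# Integer ceiling division: the core that picks up the last, partial
# batch of workloads still needs a full time slice.
TOTAL=$(( (WORKLOADS + CORES - 1) / CORES * TIME_PER_WORKLOAD ))
echo "$TOTAL"           # prints 3
```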

Another very important point to remember is that distributed systems behave differently during performance testing. Requests from multiple clients will interfere with each other a lot more than they do in a straight processing system. In the prior example, a request that takes 3 seconds when run from one client can take 6 seconds with two clients, if the last work item for the first client is started after all work items of the second client.

[Diagram: two clients’ workloads interleaved across the same cores]

That will probably not happen, yet you have to take the possibility into account. That is just the nature of the beast.



Monday, June 29, 2009

Oracle Coherence SIG - Presentation and sample code

The Oracle Coherence SIG came around. Excellent event. It was tweeted live, so go ahead and search: http://search.twitter.com/search?q=%23nycsig

I spent around 30 minutes talking about using Coherence with .NET. The first part of the presentation was on general Coherence configuration and setup; the second part was on more advanced use of serialization wrappers and LINQ to Coherence. People asked some great questions. The one important point I wanted to reiterate is that you should not be afraid of using Coherence with .NET. It is a really great product, and you will definitely get a lot of benefit out of the box with it. The wrappers I’ve provided will create some overhead, but that should be acceptable to many. Wrappers and LINQ are not a solution that will solve every .NET developer’s problem, but they should definitely get you started.

I’ve uploaded the presentation and the samples (a 4 MB zip file).

Presentation http://tfanshteyn.110mb.com/CoherencePresentation.pdf
Sample Files http://tfanshteyn.110mb.com/CoherencePresentation.zip



Tuesday, April 07, 2009

Linq to Coherence + Attributes + MetaData = Cool

First, we got the Linq Provider for Oracle Coherence

Second, we got attribute based serialization

Third, Metadata in the serialization stream

Once we put all that together, what we get is a very clean way of storing and querying data in Oracle Coherence.

The Coherence LINQ provider now supports passing a CoherenceQueryTranslator as a parameter. I am providing a MetadataCoherenceQueryTranslator that uses the getProperty method to access properties originally serialized by the generic serializer. Here’s almost all of the relevant .NET code:

Person Class

[POFSerializableObject(StoreMetadata=true)]
    public class Person// : IPortableObject
    {
        [POFSerializableMember(Order=0,WriteAsType=POFWriteAsTypeEnum.Int16)]
        public int ID { get; set; }
        [POFSerializableMember(Order=1)]
        public string FirstName { get; set; }
        [POFSerializableMember(Order = 2)]
        public string LastName { get; set; }
        [POFSerializableMember(Order = 3)]
        public string Address { get; set; }
        [POFSerializableMember(Order = 4)]
        public string Title { get; set; }
        public Person()
        {
        }
    }
Add object function

INamedCache cache = CacheFactory.GetCache("dist-Person");
for (int i = 0; i < 1000; i++)
{
    cache.Add(i, new Person()
    {
        ID = i,
        FirstName = string.Format("First Name {0}", i),
        LastName = string.Format("LastName {0}", i),
        Address = string.Format("Address {0}", Guid.NewGuid()),
        Title = i % 2 == 1 ? "Mr" : "Mrs"
    });
}

Query using Linq Query:

CoherenceQuery<Person> coherenceData =
 new CoherenceQuery<Person>(
	 new CoherenceQueryProvider(CacheFactory.GetCache("dist-Person"), 
		 new MetadataCoherenceQueryTranslator()));
string likeClause = "%8";
var people = from person in coherenceData
			 where
				(person.FirstName.Like("Test")
				 || person.LastName.Like(likeClause))
				 && person.Title == "Mrs"
			 select new { person.Title, person.ID, person.LastName };
IFilter filter = ((ICoherenceQueryable)people).Filter;
dataGridView1.DataSource = people.ToArray();

Internally, MetadataCoherenceQueryTranslator converts the LINQ query into a Coherence filter and executes the query against the Java POFGenericObject.



Monday, April 06, 2009

99.9% pure .NET Coherence (100 % No Java Code Required)

It’s not 100% .NET Coherence, since a developer is still required to modify configuration files on the Java service to specify the generic POF serializer.

So to start: The .NET Class:

 

    [POFSerializableObject(StoreMetadata=true)]
    public class Person
    {
        [POFSerializableMember(Order=0,WriteAsType=POFWriteAsTypeEnum.Int16)]
        public int ID { get; set; }
        [POFSerializableMember(Order=1)]
        public string FirstName { get; set; }
        [POFSerializableMember(Order = 2)]
        public string LastName { get; set; }
        [POFSerializableMember(Order = 3)]
        public string Address { get; set; }
        [POFSerializableMember(Order = 4)]
        public string Title { get; set; }
        public Person()
        {
        }
        
    }

Note the StoreMetadata=true argument. When StoreMetadata is specified, the generic serializer will first write a string array of property names. On the server side, we must specify a generic Java serializer; this is needed to be able to store objects for filtering. Here’s an interesting note: unless filters are invoked, the object WILL NOT be deserialized on the Java side. That means Put, Get, and GetAll calls without a filter do not require metadata to be written into the cache. Now, to specify the generic Java serializer, here is the distributed cache configuration:

      <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <serializer>
		<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
		<init-params>
		  <init-param>
			<param-type>string</param-type>
			<param-value>custom-types-pof-config.xml</param-value>
		  </init-param>
		</init-params>
	  </serializer>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

custom-types-pof-config.xml

<pof-config>
  <user-type-list>
    <!-- include all "standard" Coherence POF user types -->
    <include>coherence-pof-config.xml</include>
    <!-- include all application POF user types -->
    <user-type>
      <type-id>1001</type-id>
      <class-name>com.Coherence.Contrib.POF.POFGenericObject</class-name>
      <serializer>
        <class-name>com.Coherence.Contrib.POF.POFGenericSerializer</class-name>
        <init-params>
           <init-param>
             <param-type>int</param-type>
             <param-value>{type-id}</param-value>
           </init-param>
           <init-param>
            <param-type>boolean</param-type>
             <param-name>LoadMetadata</param-name>
             <param-value>true</param-value>
           </init-param>
         </init-params>        
	  </serializer>
    </user-type>
  </user-type-list>
</pof-config>

For now, you must specify a user-type for each .NET object. On the Java side, the server will use the POFGenericSerializer and store all values in an object array indexed by the property names. A generic getProperty method is implemented to allow filtering on any property that was used in the serialization. Property evaluation happens on the server, so only filtered data is returned.

Here’s a simple loop to add objects into the cache:

INamedCache cache = CacheFactory.GetCache("dist-Person");
for (int i = 0; i < 1000; i++)
{
    cache.Add(i, new Person()
    {
        ID = i,
        FirstName = string.Format("First Name {0}", i),
        LastName = string.Format("LastName {0}", i),
        Address = string.Format("Address {0}", Guid.NewGuid()),
        Title = i % 2 == 1 ? "Mr" : "Mrs"
    });
}

A cool side effect: the data can be accessed from .NET and from Java code in the same fashion.



Tuesday, March 17, 2009

Linq provider for Oracle Coherence (Linq to Coherence)

I am very impressed with Coherence from Oracle. Coherence provides a distributed in-memory cache and processing fabric. However, it is a lot more than just a cache: it can be used for everything from messaging to a cross-platform communication medium. There is too much to say about it here, so read more at Oracle: http://www.oracle.com/technology/products/coherence/index.html

Coherence works very nicely with .NET; however, in the days of LINQ, I wanted to write a LINQ provider for it. My code is based largely on the LINQ provider documentation on MSDN (http://msdn.microsoft.com/en-us/library/bb546158.aspx) and the excellent series on creating a LINQ provider by Matt Warren (http://blogs.msdn.com/mattwar/pages/linq-links.aspx).

I am using Google Code to host the project under Artistic License. Please check out the full source code at http://code.google.com/p/linqtocoherence/.

Below is a rundown on two main classes. The main part of the code that deals with Coherence is in two classes CoherenceQueryProvider and CoherenceQueryTranslator.

CoherenceQueryProvider accepts a connection to an INamedCache – a reference to the Coherence cache that will be queried.

public class CoherenceQueryProvider  : IQueryProvider
{
   public INamedCache Cache { get; set; }
   public CoherenceQueryProvider ()
    {
    }

   public CoherenceQueryProvider(INamedCache cache)
   {
       Cache = cache;
   }

In the Execute method, CoherenceQueryProvider translates the Where clause to a Coherence filter and executes the filter against the cache objects to return an array of values.

public object Execute(Expression expression)
{
  if (Cache == null)
      throw new InvalidOperationException("Cache is not properly set");

  // Find the call to Where() and get the lambda expression predicate.
  InnermostWhereFinder whereFinder = new InnermostWhereFinder();
  MethodCallExpression whereExpression = whereFinder.GetInnermostWhere(expression);
  LambdaExpression lambdaExpression = (LambdaExpression)((UnaryExpression)(whereExpression.Arguments[1])).Operand;

  // Send the lambda expression through the partial evaluator.
  lambdaExpression = (LambdaExpression)Evaluator.PartialEval(lambdaExpression);

  IFilter filter = new CoherenceQueryTranslator().Translate(lambdaExpression);

  object[] data = Cache.GetValues(filter);
  Type elementType = TypeSystem.GetElementType(expression.Type);
  return data;
}


CoherenceQueryTranslator uses the visitor pattern to convert the LINQ expression from the where clause to a Coherence filter. Coherence filters nest, so converting one to the other is relatively simple.

protected override Expression VisitBinary(BinaryExpression b)
{
  this.Visit(b.Left);
  object lastGlobal1 = globalFilter;
  this.Visit(b.Right);
  object lastGlobal2 = globalFilter;
  switch (b.NodeType)
  {
      case ExpressionType.AndAlso:
          globalFilter = new AndFilter((IFilter) lastGlobal1, (IFilter)lastGlobal2);
          break;
      case ExpressionType.OrElse:
          globalFilter = new OrFilter((IFilter) lastGlobal1, (IFilter)lastGlobal2);
          break;

There is a lot more code in the classes to handle other filters, but a lot of it is pretty repetitive. The work on the LINQ provider is not done, and I still have to implement some of the Coherence functionality. Full code and a usage sample are available on Google Code: http://code.google.com/p/linqtocoherence/

Check it out and post your comments / suggestions.



Wednesday, March 04, 2009

noop.nl - Top 50 New Software Development Books and other lists

I generally don’t link to other blog entries, since that doesn’t add much value for readers. However, this post is not about one specific blog entry. Jurgen has an excellent blog dedicated to software development and the management of development teams.

He has also created excellent lists of TOP *EVERYTHING*. The latest one published is Top 50 New Software Development Books. The other lists are at http://www.noop.nl/top-lists/

This is one of the blogs I would definitely recommend subscribing to.


Wednesday, February 11, 2009

Production Debugging a Memory Leak

I wrote before about not believing in regular system reboots. One of the services we wrote had a serious memory leak: the process size grew past 1 GB within 2 days, requiring us to perform regular service restarts. This is not something we were able to replicate in the development or QA environments, so I decided to do some production debugging.

I love reading Tess Ferrandez’s blog on low-level .NET debugging: http://blogs.msdn.com/tess. She has a series of walkthrough sessions; one of them is on memory leaks: http://blogs.msdn.com/tess/archive/2008/03/25/net-debugging-demos-lab-7-memory-leak.aspx

I can’t really provide the original code for our service, but I was able to replicate the basic leak in a sample app, and below are steps to find out what it is.

Sample (on skydrive.live.com): LeakyCache.zip

Sample setup: open LeakyCache.zip. Compile it if you want, or just run the included executable. Click “Leak” to leak memory.

  1. Download Debugging Tools for Windows from Microsoft and install it on the server that is running the problem application.
  2. Copy SOS.DLL from “C:\Windows\Microsoft.NET\Framework\v2.0.50727” to “c:\Program Files\Debugging Tools for Windows (x86)” to get access to the debugging library for .NET 2.0.
  3. Execute ADPlus to take a memory dump of the LeakyCache application:
    "c:\Program Files\Debugging Tools for Windows (x86)\adplus.vbs" -hang -pn LeakyCache.exe -o c:\temp\LeakDump

  1. Start WinDbg
    "c:\Program Files\Debugging Tools for Windows (x86)"\windbg
  2. From the File Menu, select “Open Dump File” and open the created dump file from C:\temp\LeakDump\
  3. Load the SOS debugging extension using the command
    .load SOS

Now the fun begins :)

  1. Run !dumpheap -stat
    What you’ll see is that most of the memory is used by System.String (53 MB) and Dictionary+Entry (22 MB). Also notice that there are more than 1 million string entries, most of them very small (under 55 bytes on average).
  2. To see the entries, use the following command to get the list of addresses (press CTRL+BREAK to stop the flow), then dump one of the objects:
    !dumpheap -type System.String -max 100
    !do 022b8978
    Substitute the address of one of the listed items for 022b8978. The output includes the text of the string. In my experience of debugging my apps, based on that data I can usually tell what is stored, where it is generated, and whether it should have been cleaned up.
  3. Run !gcroot [Reference] to see exactly what class is holding a reference to the object.

A walkthrough like this will not necessarily solve a problem in the application, but it can point to a possible issue that can then be fixed. To me, a memory leak is not a problem that should be ignored; it is a bug that can be fixed.

Note: Huge thanks to Tess for the wonderful blog http://blogs.msdn.com/tess


Monday, February 09, 2009

Keep release PDBs to help with debugging production problems

Not many developers know that the PDB files generated during release builds are just as helpful as the ones from debug builds.

For some background: PDB files contain debugging symbols that are used by .NET debuggers (including Visual Studio) to look up source code information. If symbols are available, the debugger will be able to show not just the function where an exception happened, but also the line number in the source file where it occurred.

Our current build process copies the results of every build into a separate output folder, away from the source code. A new step was just added to copy the PDBs into a subfolder as well. Here’s a snippet of XML that I’ve added to the .csproj target:

    <CreateItem Include="$(TargetDir)\*.pdb">
      <Output TaskParameter="Include" ItemName="PDBFiles" />
    </CreateItem>
    <Copy SourceFiles="@(PDBFiles)" 
            DestinationFolder="$(OutputPath)\PDB" />

One way to use PDB files is to ship them with your application. If the PDB file is available at the time of an exception, the exception information will include line numbers and source file names.

You can also debug release versions of the executable using Visual Studio. From the Tools menu, select “Attach to Process” and select your executable. After the debugging session starts, open Debug → Windows → Modules, right-click the module for your executable, and select “Load Symbols From”. Point it at your PDB file, and you are done. You will also need the source code available if you want to step through it; that, however, is a completely different issue.


Monday, January 19, 2009

CAPICOM.dll Removed from Windows SDK for Windows 7

It’s not often that I hear of a Windows SDK system component being removed from a future version of Windows. As a matter of fact, this is the only time that I know of (though I am sure it has happened before).

Karin Meier from the Windows SDK team announced on the team blog that CAPICOM is now considered deprecated, and alternatives are provided at http://msdn.microsoft.com/en-us/library/cc778518(VS.85).aspx


Wednesday, January 07, 2009

Benefits and Hindrances of Regular Server Reboots

First of all, stackoverflow.com is very, very cool. I’ve talked about it before and would like to reiterate the point. The site gets a tremendous amount of traffic and is great for asking any development question or starting technology-related discussions.

Now to the main point.

Over the years that I’ve been doing software development and architecture, I have had a chance to work directly on server and data center architecture. One of the most important goals of the software and hardware design was stability, which is generally measured in uptime. We spent a lot of time looking for memory and other resource leaks, and the servers needed to be designed with the same resiliency in mind.

However, I’ve also worked with IT managers with extensive experience who followed a different paradigm: weekly, controlled reboots of all servers. I looked around, and the policy is not at all uncommon.

I brought this question to Stack Overflow for more comments; please check them out there.

http://stackoverflow.com/questions/410413/benefits-and-hindrances-of-regular-server-reboots

Some consider this a foolish policy; for others, it is a weekly test that is good if you can afford it. Even though I definitely see the benefits of testing startup scripts and resource cleanup, I have to stand behind my original view: reboots for the sake of rebooting are overkill; they add downtime and waste personnel resources. Scheduled maintenance windows for server maintenance (hardware and software) are a completely different story.



Sunday, December 28, 2008

Stack Overflow

The site StackOverflow.com has been available for a few months, but I only recently discovered it. It was created by Joel Spolsky, Jeff Atwood, Jarrod Dixon, Geoff Dalgas, Jeremy Kratz, and Brent Ozar.

The idea behind the site is a question/answer forum for all things development related, and it is not language specific at all. Definitely a recommended place to visit.

http://www.stackoverflow.com


Monday, December 08, 2008

Incremental Shortcuts in Eclipse

I am fairly new to Eclipse, but the more I use it, the more I like it.

My latest discovery is how efficient it is to use shortcuts to find “stuff” in Eclipse. The lookups are incredibly fast and very useful.

There is Open Type (CTRL+SHIFT+T): just start typing.


There is also Open Method (CTRL+O): again, just start typing.


The incremental typing part is critical. It is available even in the Preferences dialog.


I really wish this functionality would exist out of the box in Visual Studio.



Thursday, June 26, 2008

Dropping Visual Studio Unit Testing for NUnit

Continuing with my adventures in porting Java code to .NET, I am now dealing with moving JUnit unit tests over. My first thought was to convert them to the Microsoft unit testing framework. However, after spending quite a bit of time trying to get it to work, I gave up and switched to NUnit.

Two things that are NOT supported by the Microsoft framework forced me to switch:

    • Per-test SetUp/TearDown functions. [TestInitialize] and [TestCleanup] are not called for every [Test]; they are called when the test class is initialized. That means it’s very hard to have good initialization and cleanup routines for every test.
    • Lack of test inheritance. I have a set of unit tests that test different implementations of an interface. The core tests of the interface must be the same, to make sure that all implementations handle the core identically, while special additional tests validate extended functionality. Currently, you cannot accomplish that without inheritance.

Both are fully supported by NUnit.


Saturday, May 31, 2008

1st Java annoyance

For the last month I’ve spent as much time writing Java code as C# code, and it’s definitely been a great learning experience. Even though the core languages are very much alike, and you can usually find a function in .NET that corresponds to one in Java and the other way around, I spent quite a bit of time yesterday on something that should have been completely trivial.

The Problem:

Given a Date variable loadDate that includes a date and time, create two variables, startDate and endDate, where startDate is the date portion of loadDate, and endDate is startDate + 1 day.

C# Code:

DateTime loadDate = DateTime.Now;
DateTime startDate = loadDate.Date;
DateTime endDate = startDate.AddDays(1);

Java Code:

GregorianCalendar cal = new GregorianCalendar(
    loadDate.getYear() + 1900, loadDate.getMonth(), loadDate.getDate());
Date startDate = cal.getTime();
cal.add(Calendar.DATE, 1);
Date endDate = cal.getTime();

Why is the Date.getYear() function returning 108 for the year 2008? What is the logic behind that? Are the Java developers afraid of running out of integer values?

Why does the Calendar class supply a clearDate() function, but no clearTime() function?

Why can’t I add Dates the way I can other classes?

Why doesn’t the Calendar.add() function return a result instead of replacing the internal value, the way other classes do?

Why is none of this documented in the Date class?



Friday, January 25, 2008

WCFTestClient - a testing utility from Visual Studio 2008

I stumbled upon an excellent utility for WCF testing that comes with Visual Studio 2008: WCFTestClient.

The tool is a simple way to test WCF services over HTTP and TCP bindings. Some things are not supported; however, for basic WCF testing, this definitely beats the old ASMX test page.

Note: Also check out the WCFSvcHost utility from Visual Studio to host an arbitrary WCF Service.



Thursday, January 17, 2008

.NET 3.5 Source is now available

Everyone's talking about it: the source for the .NET 3.5 framework is now available for debugging. Read the full setup instructions here:

http://blogs.msdn.com/sburke/archive/2008/01/16/configuring-visual-studio-to-debug-net-framework-source-code.aspx


Thursday, January 10, 2008

Customizing SCSF Guidance Package for Modular Development

One of the requests I've received from other developers is the ability to use SCSF to develop a module without including the shell in the solution. We develop a large number of modules independently in different groups, and having the shell be a part of every module was becoming a problem.

The only issue I ran into while getting this to work was that the SCSF guidance package would fail in ViewTemplateCS when I right-clicked on a folder and tried to add a new view to the project.

To solve the issue, I made a small tweak to the source in ViewTemplateReferenceCS.cs. (The code comes with the SCSF; however, you will have to install it separately after the SCSF is installed.) The culprit is the function ContainsRequiredReferences(Project project), specifically the call to ContainsReference(project, prjCommon.Name).

Since the common project is not in the solution, the call failed with a NullReferenceException. All I had to do was change the last line of the function to ContainsReference(project, "Infrastructure.Interface"), then recompile the GuidancePackage solution and place the Microsoft.Practices.SmartClientFactory.GuidancePackage.dll into the C:\Program Files\Microsoft Smart Client Factory\Guidance Package folder.



Monday, January 07, 2008

Fixing WCF/WPF VS 2005 Extensions installation after installing VS 2008 or .NET 3.0 SP1

I encountered a problem trying to fix the WCF/WPF Visual Studio 2005 integration components after I installed Visual Studio 2008.

Installing VS 2008 installs .NET 3.0 SP1 and removes the installation of .NET 3.0. When trying to install the WCF/WPF extensions, the installer displays the message:

Setup has detected that a prerequisite is missing. To use Visual Studio 2005 extensions for .NET Framework 3.0 (WCF & WPF), November 2006 CTP you must have the .NET Framework 3.0 runtime installed. Please install the .NET Framework 3.0 runtime and restart setup

You can't really install .NET 3.0, since a newer version (.NET 3.0 SP1) is already installed.

I found a solution on the MS Forums http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2550726&SiteID=1

It involves creating a registry key to fool the installer into thinking that .NET 3.0 is installed. To fix the issue, add the following value to the registry:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{15095BF3-A3D7-4DDF-B193-3A496881E003}] "DisplayName"="Microsoft .NET Framework 3.0"

Thanks Erich for the solution.

Note: I got a comment that this can also be forced using command line: msiexec /i vsextwfx.msi WRC_INSTALLED_OVERRIDE=1

Thanks

