Tuesday, October 25, 2011

Azure - The imported project microsoft.windowsazure.targets was not found

If you receive the following error:
"The imported project microsoft.windowsazure.targets was not found"
There are two possible solutions.

1) You need to update your "Windows Azure Tools for Visual Studio" with the 1.4 August update.

       http://www.microsoft.com/windowsazure/sdk/

       Windows Azure Tools for Visual Studio (August 2011 update)

2) Or, if you wish to stay with the Windows Azure Tools for Visual Studio March 2011 release, you are using the wrong targets.

Replace:
<Import Project="$(CloudExtensionsDir)Microsoft.WindowsAzure.targets" />

With:
<Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />

and

Replace:
<CloudExtensionsDir Condition=" '$(CloudExtensionsDir)' == '' ">$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Windows Azure Tools\1.4\</CloudExtensionsDir>

With:
<CloudExtensionsDir Condition=" '$(CloudExtensionsDir)' == '' ">$(MSBuildExtensionsPath)\Microsoft\Cloud Service\1.0\Visual Studio 10.0\</CloudExtensionsDir>

Then reload the project in your solution, or add it back as an existing project.
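For context, here is a minimal sketch of how those two corrected lines sit together in a cloud project file when staying on the March 2011 release (element placement assumed from a default .ccproj; surrounding elements omitted):

```xml
<!-- Hypothetical .ccproj excerpt for the March 2011 tools -->
<PropertyGroup>
  <CloudExtensionsDir Condition=" '$(CloudExtensionsDir)' == '' ">$(MSBuildExtensionsPath)\Microsoft\Cloud Service\1.0\Visual Studio 10.0\</CloudExtensionsDir>
</PropertyGroup>
<Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
```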

Wednesday, August 31, 2011

Connect to SQL Azure from Matlab

1. Download the SQL Server JDBC Driver 3.0, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and above. This version of the driver adds support for SQL Azure.
2. Execute it, and it will be decompressed to the selected location.

3. Create folder c:/SQLJDBC and place sqljdbc_3.0 inside it.

4. Start the Notepad application as administrator and open the file:

C:\Program Files\MATLAB\R2011b\toolbox\local\classpath.txt

5. Add a reference like this to the JDBC driver after the last line of that file:

c:/SQLJDBC/sqljdbc_3.0/enu/sqljdbc4.jar

6. After that, restart MATLAB; then you can create the connection:

dbConn = database('master', 'user', 'password', 'com.microsoft.sqlserver.jdbc.SQLServerDriver', 'jdbc:sqlserver://sqlazurepath;databaseName=master;');

7. To test the connection you can execute:

ping(dbConn);

Tuesday, July 12, 2011

Azure Traffic Manager - Keep Alive and Round Robin

As you may know, Azure came out with the Traffic Manager, which is essentially another load-balancing layer on top of their standard round-robin load-balancing layer.
Before the Traffic Manager, one workaround was to use the RoleEnvironment.StatusCheck event to get the latest state of your service and call SetBusy to take a particular instance out of the load balancer rotation for 10 seconds. Similarly, there were a few other workarounds that required you to keep track of what your VMs were doing and redirect traffic accordingly, to prevent any one instance from becoming bogged down and unresponsive.
But there is still no load-balancing technique that actually balances by a load diagnostic such as CPU usage.
So what happens in the following case:

Normally if you have multiple requests coming in from the same ip address, and have "Keep Alive" set on, all those requests would be sent to just one instance, right?

Yes, and this round-robin technique might be fine for a standard website, but not for an application that receives tons of data from many single-point locations and must be optimized for speed.

Turning Keep-Alive off would solve the problem, but each request would then have to establish a new connection, and the performance hit is roughly tenfold.

The good news is that this is a big difference with the Traffic Manager round-robin pattern. For some reason we get the performance gain of leaving "Keep Alive" on, while the round robin is dispersed by request rather than by source location, which basically simulates a CPU-usage-based load-balancing technique. Hooray!

Azure Traffic Manager

Windows Azure Traffic Manager is a new feature that allows customers to load balance traffic to multiple hosted services. Developers can choose from three load-balancing methods: Performance, Failover, or Round Robin. Traffic Manager monitors each collection of hosted services on any HTTP or HTTPS port. If it detects that a service is offline, Traffic Manager sends traffic to the next best available service. By using this new feature, businesses will see increased reliability, availability, and performance in their applications.

The Windows Azure Traffic Manager CTP is now available. We would like you to try it out and provide feedback. During the CTP period, Windows Azure Traffic Manager is free of charge, and invitation-only. To request an invitation, please visit the Beta Programs section of the Windows Azure Portal.

Thursday, June 23, 2011

Azure inbound data will be free starting July 2011

To highlight…

Microsoft announced a change in pricing for the Windows Azure platform that will provide significant cost savings for customers whose cloud applications experience substantial inbound traffic, and customers interested in migrating large quantities of existing data to the cloud. For billing periods that begin on or after July 1, 2011, all inbound data transfers for both peak and off-peak times will be free.


Data in/Bandwidth in = free. 

Rock it Azure!

Friday, June 10, 2011

Azure Table Storage - Selecting a partial range with IDs used in Partition and Row Keys

To explain, let’s say we have a row key composed of SomeID and SomeOtherID delimited by an underscore, and we have the following set of records in a table:
1_1
2_3
10_4
100_5

If we wanted to get all records where SomeID is 1, we would use the CompareTo method to retrieve a partial matching range. Sample code is shown here:

startRowKey = "1_"
endRowKey = "2_"

        public IList<T> GetAllByPartitionKeyAndRowKeyRange(string partitionKey, string startRowKey, string endRowKey)
        {
            // CompareTo in the where clause is translated into a lexicographic
            // range filter against the table service.
            CloudTableQuery<T> query = (from c in this.CreateQuery<T>(TableName)
                                        where c.PartitionKey == partitionKey
                                        && c.RowKey.CompareTo(startRowKey) >= 0
                                        && c.RowKey.CompareTo(endRowKey) < 0
                                        select c).AsTableServiceQuery();

            query.RetryPolicy = this.RetrySettings;
            IEnumerable<T> results = query.Execute();

            return ConvertToList(results);
        }

This would produce the following result set because of Azure's lexicographic ordering, which compares one character at a time, making "10" less than "2":
1_1
10_4
100_5

The only solution I could come up with was to build keys of the same character length by prepending zeros (zero padding) to each id (we used 36 characters, since that's a GUID's length). For example, when inserting or selecting, a numerical id within a key would look like the following:
               0...00001

This makes our set of records look like the following:
0...00001_0...00001
0...00002_0...00003
0...00010_0...00004
0...00100_0...00005

Which will now produce an accurate result set when queried.
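The underlying behavior is easy to reproduce outside of table storage, since any ordinal string comparison sorts the same way. A small sketch (Python used purely for illustration; the ids mirror the records above, padded to width 5 here for brevity rather than the 36 used in the post):

```python
# Lexicographic comparison goes character by character, so "10" < "2".
assert "10" < "2"

# Zero padding to a fixed width makes lexicographic order match numeric order.
def pad(n, width=5):
    return str(n).zfill(width)

assert pad(10) > pad(2)  # "00010" > "00002"

# With padded ids inside the keys, a range filter now selects correctly.
keys = ["1_1", "2_3", "10_4", "100_5"]
padded = ["{}_{}".format(pad(int(a)), pad(int(b)))
          for a, b in (k.split("_") for k in keys)]

start, end = pad(1) + "_", pad(2) + "_"
matches = [k for k in sorted(padded) if start <= k < end]
print(matches)  # ['00001_00001'] — only the record whose SomeID is 1
```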

Again, this applies to the situation where you have IDs in your Partition and Row Keys, so using the property bag is not a viable way to get around this.


And remember not to do a partial lookup on a Partition Key; that results in a regular old table scan.

This was a tough one, hope this helps!

Tuesday, June 7, 2011

Azure Table Storage - don’t use forward slash / character in PartitionKey or RowKey

 You’ll get the following error: “One of the request inputs is out of range.”

Characters Disallowed in Key Fields:

The following characters are not allowed in values for the PartitionKey and RowKey properties:
  • The forward slash (/) character
  • The backslash (\) character
  • The number sign (#) character
  • The question mark (?) character 
http://msdn.microsoft.com/en-us/library/dd179338.aspx
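Since "One of the request inputs is out of range." doesn't name the offending character, it can be worth validating keys up front. A hypothetical helper (Python for illustration only; the character list mirrors the MSDN page above):

```python
# Characters MSDN lists as disallowed in PartitionKey/RowKey values.
DISALLOWED = set("/\\#?")

def is_valid_key(key):
    """Return True if the key contains none of the disallowed characters."""
    return not any(ch in DISALLOWED for ch in key)

print(is_valid_key("orders_2011"))        # True
print(is_valid_key("orders/2011-06-07"))  # False: contains a forward slash
```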

Friday, June 3, 2011

Connect to SQL Server from Matlab

Alternatively, you can create and use an ODBC connection; but if you would rather not do that for every database and server you wish to connect to, you can do the following using the JDBC driver.

1. Download the JDBC driver from Microsoft:


2. Execute it, and it will be decompressed to the selected location.

3. Create folder c:/SQLJDBC and place sqljdbc_2.0 inside it.

4. Start the Notepad application as administrator and open the file:

C:\Program Files\MATLAB\R2010a\toolbox\local\classpath.txt

5. Add a reference like this to the JDBC driver after the last line of that file:

c:/SQLJDBC/sqljdbc_2.0/enu/sqljdbc4.jar

6. After that, restart MATLAB; then you can create the connection:

dbConn = database('master', 'user', 'password', 'com.microsoft.sqlserver.jdbc.SQLServerDriver', 'jdbc:sqlserver://localhost:1433;databaseName=master;');

7. To test the connection you can execute:

ping(dbConn);

8. Please note that the local instance of SQL Server requires the TCP/IP network protocol to be enabled.

----------------

Other references:   http://www.mathworks.com/help/toolbox/database/ug/database.html

Thursday, April 7, 2011

Azure dev fabric "Invalid directory in handler configuration" on 32bit machine

Cause:
This type of error can occur for any invalid directory caused by an empty folder in your web role project. When your cloud project is the startup project, it makes (let's say) a copy of your web role project in the bin directory of the cloud project.

In our case the ASP.NET charting temp folder is empty, so the temp folder is not created in bin at run time, thus resulting in this error.

This has something to do with the combination of the following:
  • A 32-bit machine
  • Read-only access at run time
  • The empty folder
  • A handler in your web.config with a specified folder location

Solution:
Create the folder manually.

Wednesday, March 9, 2011

Azure "SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used"

When you get the following exception: 

SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used

A workaround is to retrieve your connection string using CloudStorageAccount's Parse() method instead of the FromConfigurationSetting() method. For example, replace:

        private static CloudStorageAccount storageAccount =
        CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

with:

        private static CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));

Azure "DataServiceQueryException was unhandled"

System.Data.Services.Client.DataServiceQueryException was unhandled 

Message -An error occurred while processing this request.
 
Inner Exception Message - One of the request inputs is not valid.


1) Don't forget that you need to create the tables programmatically if they don't already exist. Something like this:

            tableClient.CreateTableIfNotExist("Customer");

2) Insert before querying a table:

I finally found out that when using local (development) storage, you must insert a record into a table before querying it. Note that a record does not have to be in the table when you are actually querying it, but one has to have existed in the table at some point. Note that this ONLY applies to local storage; cloud storage does not have this limitation.

Azure SDK 1.3 "Role instances are taking longer than expected to start." bug still exists in 1.4, and I'll tell you why.

After installing SDK 1.4, released today, it appears that only half the problem was fixed. Initially, the following errors could have been thrown due to many things, some of which include the following:

On Start Debugging, the following errors always occur:
The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state.
And
Role instances are taking longer than expected to start. Do you want to continue waiting?


1) Exit both Azure emulators, clean the solution, and redeploy.
2) If your project is under source control and your web.config is read-only, make it writable.
Or
Add this to the Post-build event command line in your project properties:
attrib -r $web.config
…case sensitive
3) Enable enableNativeCodeExecution in the ServiceDefinition.csdef.
4) Make sure there is only one web role in the solution.
5) Make sure the instance count is set to 1.
6) Double-check Copy Local on assemblies.
7) Ensure WCF Activation is enabled.
8) Run Visual Studio in administrator mode.
9) After running, open IIS and make sure directory browsing is enabled.
   Apply Network Service permissions to the root folder, which allows Azure to dynamically enable directory browsing in IIS on dev fabric start.
10) .NET Full Trust in IIS.
11) Check the cloud project's .NET Framework version.
12) Make sure your cloud project is the startup project.
13) Make sure your website properties have a start-up page set.
14) WaIISHost.exe 60-second timeout.

All of these are easily resolved through configuration, except for #14.

With the new full IIS functionality, there is a change of architecture in the way the web role DLL is hosted. When we run in the local development fabric, WaIISHost.exe talks to the IISConfigurator process through a WCF named pipe. The IISConfigurator process gets the web role running within IIS. Looking at this named pipe, it has a timeout of 60 seconds.

The issue occurs because, when running under the development fabric, the ACLs are redone on the local resources to make sure IUSR has permissions. This is called from WaIISHost and run in IISConfigurator. Unfortunately, with the number of files in a given project, this may take longer than the 60-second timeout, and this is what leads to the communication exception.

Unfortunately, this 60-second timeout is hardcoded, so it can only be resolved one way at the moment…
#14 Solution)
Delete the <Sites> section from ServiceDefinition.csdef and the emulator reverts to using the Hostable Web Core (HWC) rather than Full IIS, and the timeout is avoided.
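For reference, the element to remove is the <Sites> block of the service definition; a minimal sketch (role and endpoint names hypothetical, other elements omitted):

```xml
<!-- Hypothetical ServiceDefinition.csdef excerpt: deleting the <Sites>
     element below makes the emulator fall back to Hostable Web Core. -->
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
  </WebRole>
</ServiceDefinition>
```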

With the new SDK everything is resolved or configurable again except for #14, although a new error now appears which helps validate that this is in fact the problem.

Error)

This request operation sent to net.pipe://localhost/iisconfigurator did not receive a reply within the configured timeout (00:01:00).  The time allotted to this operation may have been a portion of a longer timeout.  This may be because the service is still processing the operation or because the service was unable to send a reply message.  Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.

Referencing Bill's link in the comments below, the following may be a feasible workaround. I have not yet tried it, but please feel free to try it out and let me know.

If you need the Full IIS role and cannot delete the <Sites> section, then you can use the following workaround:
#14 Solution)
a) Copy all the files (or most of the files) in your virtual directories folder into ZIP files.
b) When the role starts, expand the ZIP file in the OnStart() function.
c) IISConfigurator.exe runs before the role starts, so having thousands of files in a ZIP will not cause the total time to exceed 60 seconds, and there will be no exception.
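The zip-at-startup idea itself is language-agnostic; as a rough sketch (Python here for brevity and using a temp directory as a stand-in — in a real role the expansion would live in the C# OnStart() method):

```python
# Sketch of the ZIP workaround: ship many files as one archive and expand
# them when the role starts, so only one file exists before role start.
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "content.zip")

# Build a stand-in archive (in a real deployment this ships with the role).
with zipfile.ZipFile(archive, "w") as zf:
    for i in range(3):
        zf.writestr("static/file{}.txt".format(i), "payload {}".format(i))

# "OnStart": expand the archive next to the role's content.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(workdir)

print(sorted(os.listdir(os.path.join(workdir, "static"))))
# ['file0.txt', 'file1.txt', 'file2.txt']
```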
Hope this helps!