Saturday, April 11, 2009

Jts 3 Development Wiki

I found an excellent free wiki from ScrewTurn. I was up and running in about 10 minutes. Excellent.

Anyway, I created a wiki to document the Jts 3 development effort.

http://www.allardworksdev.com/Wiki

Tuesday, March 31, 2009

Jack’s First SQL Query

 

Jack is just about 6 months old now. As anticipated, he has expressed an interest in programming. He was sitting on my lap yesterday as I was tweaking a query. He started to pound on the keyboard, so I opened up a fresh window for him so that he could express his programming desires uninhibited by my existing work.

Here is what he came up with:

“  ."

That's 2 spaces, a period, and another space. Not only should you appreciate the query itself, but also the manner in which it was written. Some people have a vague idea of what they are doing and sit down to figure it out through a cycle of research and trial-and-error. Others know what has to be done and simply do it; solving the problem takes exactly as long as it takes to type in the solution. I am of the latter classification, and it would seem that whichever gene enables that skill has been passed to the offspring. Jack was Picasso and the keyboard was his canvas; there was no delay or thought, simply action. It was so natural that a casual observer may have perceived it as nothing more than the random flailing of his 2 topmost limbs.

As a dad I proudly exclaim that this is a tremendous victory. While not completely devoid of issues, it is an excellent first step toward solving many well-known SQL puzzles. I love the initiative, the attitude, and the overall spirit of the effort. It takes more than skill to be a programmer; you need to love it. For that he gets an A+.

But, as a software architect and a mentor, I must be fair and point out the very rare opportunities for improvement.

  • Excessive whitespace – surely we don’t need spaces on both sides of the period. Perhaps he should consider a tab rather than consecutive leading spaces. I was going to suggest this to him, but he chose that moment to engage in a massive crap. He was concentrating very hard on the pushing exercise, with his brow furrowed and his face turning red from the effort. Relative to his intestinal action and diaper trauma, the tab issue seemed trivial, so it was left unsaid.
  • The period – in this context, it doesn’t actually do anything, which is OK. I see that the period is a solution; it's just that we don’t understand the problem because we're dumb. I only mention it here because he didn’t comment it.

Clearly Jack is on the road to programming greatness.

Sunday, March 29, 2009

XQuery – How ye disappoint

 

I’m updating a legacy app from VB6 to ASP and then to .NET. The ASP is a transitional step so that I can stop dealing with COM+ objects.

Anyhoo… I picked one of the common pages. It prints a grid, basically.

  1. page calls a vbscript function
  2. the vbscript function executes a query and gets back a recordset
  3. a series of nested loops convert the recordset to xml
  4. the xml is returned to the page
  5. the page uses XSLT to convert the XML to HTML

Sweet.
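For the curious, step 5 might look roughly like this. This is a minimal sketch against a hypothetical <films> document, not the app's real markup:

```xml
<!-- sketch: render each film in a hypothetical <films> doc as a table row -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/films">
    <table>
      <xsl:apply-templates select="film"/>
    </table>
  </xsl:template>
  <xsl:template match="film">
    <tr>
      <td><xsl:value-of select="@name"/></td>
    </tr>
  </xsl:template>
</xsl:stylesheet>
```

The real stylesheet is bigger, of course, but that's the general shape of the grid.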

Now that I’m converting it to .NET, though, I wanted to try exciting new possibilities. The application is on SQL 2000, but updating to 2005 is a reasonable expectation. (I won’t push my luck with 2008).

The Intent

The idea is to transform this data

[image: the source data]

into this xml

[image: the target xml]

In the current app, that transformation is done in ASP VbScript.

 

SQL XML

I’ve dabbled with some of the XML capabilities in 2005. I’ve used it to join XML to tables and to shred XML. I’ve also used it to create XML documents without that pesky !TAG! syntax. But they were all meager efforts.

I started by hoping that such meagerness would be sufficient.

declare @startDate datetime
declare @endDate datetime
declare @theaterId int

select
    @startDate = '10/31/2003',
    @endDate = '11/6/2003',
    @theaterId = 170

    select 
        v.FilmId "film/@film-id",
        v.FilmName "film/@film-name",
        v.PrintId "film/print/@print-id",
        dates.[Date] "film/print/date/@date",
        a.AuditoriumName "film/print/date/auditorium/@auditorium-name",
        a.AuditoriumId "film/print/date/auditorium/@auditorium-id"
    from 
But it's not. That built the hierarchy, but it repeats itself over and over. It doesn’t group itself the way I need. If there is a way to do everything I need by specifying the paths like that, then it would be a good day.
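For what it's worth, the usual workaround I've seen for the grouping problem is to nest correlated subqueries, each with its own FOR XML PATH and the TYPE directive, so each level groups itself. A sketch, using hypothetical Films/Prints tables rather than the app's real schema:

```sql
-- sketch: grouped hierarchy via nested FOR XML PATH subqueries
-- (Films, Prints, and their columns are hypothetical names)
select
    f.FilmId   as '@film-id',
    f.FilmName as '@film-name',
    (select
         p.PrintId as '@print-id'
     from Prints p
     where p.FilmId = f.FilmId
     for xml path('print'), type)   -- TYPE nests the xml instead of encoding it
from Films f
for xml path('film'), root('theater')
```

The TYPE directive is the important bit; without it the inner xml comes back as an encoded string.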



SQL XQUERY



Next, I started dabbling with XQUERY. My only XQUERY experience has been via SQL Server 2005, and in its simplest form.



I went Google-Crazy and read up on some stuff. I was able to write a query (albeit a crappy one) that does the job.





select @output.query('
    <theater>
    {
        for $filmId in distinct-values(/theater/film/@film-id)
        return 
        <film>
            { attribute id { $filmId }}
            { attribute name { /theater/film[@film-id = $filmId][1]/@film-name }}
            {
                for $printId in distinct-values(/theater/film[@film-id = $filmId]/@print-id)
                order by $printId
                return 
                <print>
                    { attribute id { $printId }}
                    {
                        for $date in distinct-values(/theater/film[@film-id = $filmId and @print-id = $printId]/@date)
                        order by $date
                        return 
                        <date>
                            { attribute date { $date }}
                            {
                                for $auditoriumId in distinct-values(/theater/film[@film-id = $filmId and @print-id = $printId and @date = $date]/@auditorium-id)
                                return
                                <auditorium>
                                    { attribute id { $auditoriumId }}
                                    { attribute name { /theater/film[@film-id = $filmId and @print-id = $printId and @date = $date and @auditorium-id = $auditoriumId]/@auditorium-name }}
                                </auditorium>
                            }
                        </date>
                    }
                </print>
            }
        </film>
    }
    </theater>
')




How does this suck? Let me count the ways




  1. 4 levels of nesting. It's not pretty. But the VbScript has the same layers. (The logic is different, but it's just as nested.)


  2. Each layer has to go back to the top of the document and work its way back down based on the key information collected thus far.


  3. In the loops, I can only order by the loop indexer. For example: Auditorium. I don’t want to sort on “auditorium id”; I want to sort on display order. I can’t, because “auditorium id” is a value, not a node. If it were a node, I’d be able to get to a sibling attribute.


  4. It offers a handy distinct-values, but does not offer a handy distinct-nodes. (There are examples of how to do distinct-nodes, but the few I’ve seen use the let clause, which you can’t use in SQL 2005.)



What doesn’t suck



Obviously I’m having problems with it, but that may just be due to my staggering 45 minutes of inexperience with it.




  1. I like the syntax of specifying the attributes (shown) and elements (not shown) through the {} syntax


  2. I like that the comments are smiley faces (not shown).  (: this is an xquery comment :)


  3. In principle, I like how you can do the layering.



The Problem



I got the XML that I want, but it's slow. The SQL XQUERY consists of 2 parts: the query to get the data as xml, and the xquery to transform it into the shape I’d like.



The first part comes back instantaneously. The 2nd part takes anywhere from 2 to 16 seconds. One time, it took a minute and 54 seconds?! It's really inconsistent. I looked at the execution plan multiple times. Every time, it says that the first query accounts for 0% of the time and the 2nd query accounts for 100% of the time.



The legacy app does all it needs to do, including rendering it on the page, in half a second or less. You don’t even see it happen; you just click the link and the page renders.



I know that my xquery is amateur. If I can rewrite it the way it should be written and try again, maybe the results will be drastically improved. (At least I hope they are.)



Things that Would Help




  • SQL 2008 supports the let clause. If I had that in 2005, I could assign node sets at the various levels and treat them as the root for that level. Then it wouldn’t have to go back to the top of the document every time. (At least, it seems like I’d be able to do that.)


  • If I could do a distinct-nodes instead of distinct-values, then as I loop through, I could get the other stuff I need relative to the attribute. E.g.: $film-id/../@film-name.


  • Knowledge of XQuery would sure be helpful.
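If the 2008 upgrade ever happens, the first bullet might look something like this. An untested sketch only, against the same /theater/film document as above:

```xquery
(: sketch: bind each film's nodes once with let, instead of re-walking the doc :)
for $filmId in distinct-values(/theater/film/@film-id)
let $films := /theater/film[@film-id = $filmId]
return
<film id="{$filmId}" name="{$films[1]/@film-name}">
{
    for $printId in distinct-values($films/@print-id)
    order by $printId
    return <print id="{$printId}"/>
}
</film>
```

Each inner loop would then query $films rather than starting over at /theater.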



Next Steps / Conclusions



I wanted the source doc to be hierarchical so that it would be an accurate representation of the data. Since the XQuery didn’t work out, I may end up doing it in C#. Then the page will use an XSLT to render it. (I’ll look into using XQUERY to render it, but I don’t think that’s a viable option yet.)



I developed the original application starting in 2001. Over the first few years, I spent a lot of time performance testing the quickest way to get the data out of the database and onto a page. I always lean towards XML and XSLT so that you can easily render it different ways. I want to keep it transformable.



Of all the things I tried, the quickest thing has always been:




  1. Run the query and get back a flat dataset


  2. Use code to convert the dataset to xml



Despite the repeating data, and despite the manual conversion, it wins every time.
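In .NET, that two-step approach might sketch out like this. The column names and the sort assumption are hypothetical, not the app's real schema:

```csharp
using System.Data;
using System.Xml;

// sketch: convert a flat result set into grouped XML
// (hypothetical columns; assumes the rows come back sorted by FilmId)
static void WriteTheaterXml(DataTable table, string path)
{
    using (XmlWriter w = XmlWriter.Create(path))
    {
        w.WriteStartElement("theater");
        object currentFilm = null;
        foreach (DataRow row in table.Rows)
        {
            if (!row["FilmId"].Equals(currentFilm))
            {
                if (currentFilm != null) w.WriteEndElement(); // close previous <film>
                currentFilm = row["FilmId"];
                w.WriteStartElement("film");
                w.WriteAttributeString("film-id", row["FilmId"].ToString());
                w.WriteAttributeString("film-name", row["FilmName"].ToString());
            }
            w.WriteStartElement("print");
            w.WriteAttributeString("print-id", row["PrintId"].ToString());
            w.WriteEndElement();
        }
        if (currentFilm != null) w.WriteEndElement(); // close last <film>
        w.WriteEndElement(); // </theater>
    }
}
```

The real version has more levels (date, auditorium), but it's the same break-on-key loop all the way down.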



Things I may try




  1. Convert the SQL XML to my XML via XSLT


  2. Convert the SQL XML to my XML via C# code (the old fashioned way with a new language)


  3. Read more about XQuery to determine how off-target my query really is

Friday, March 27, 2009

Adventures in SCM

After a long hiatus, I am resuming work on JTS. JTS is a theater management system that Muvico (http://muvico.com) has been using since 2001ish.

2001. That was years ago. Those were the days of ASP and COM+. .NET was still called ASP+. I was still prefixing everything I did with the letter J.

The application now spans all generations of development technologies from ASP to .NET 3.5 SP1. Swell.

The first big effort is to downgrade all of the COM+ to ASP, because ASP is easier to work with these days. The second big effort will be to selectively convert the modules to the IN-DEVELOPMENT JTS 3 API.

I started researching what I can use for SCM. Ideally, I want an online repository so that it doubles as a backup service. (I use MOZY, but the client is dreadfully bad. It's almost unusable at this point. They said they’re rewriting it, but we’ll see.)

My googling led me here: http://www.myversioncontrol.com

For $5/month, I get what I need. It's Subversion-based, which I've never used before. The service is excellent and it's very generously priced. I have a security concern about it, though. I sent an email and we'll see what happens. (It never asks for an encryption key. I’d like to keep their eyes out of my code.)

MyVersionControl recommends 2 SVN clients: RapidSVN and TortoiseSVN.

I couldn’t get RapidSVN to do anything. I got a lot of exceptions. I then tried TortoiseSVN, which is a set of Windows Explorer shell extensions. That worked out pretty well. It took a little getting used to, though. My SCM exposure has been limited to VSS and StarTeam. This subversion stuff is different.

Finally, I tackled VS integration, which ended up being an easy task. I found this product: http://visualsvn.com. I installed it and started using it. Piece of cake. VisualSVN, like MyVersionControl, offers a 30-day trial.

Everything went well. I spent some time adding some filters to weed out the files that I don’t need to control. I checked everything in. My repository is now 50% full. Now I have 30 days to see if this sense of joy is permanent or fleeting.

Monday, March 2, 2009

Getting back on the ball with the DvdFriend, RSS feeds etc.

 

Today, I decided that I wanted to get my RSS feeds under control. I’m not very RSS savvy… I just use Internet Explorer to subscribe to them; nothing fancy. I ventured out to find a good web solution. I came across this little startup called “Google” which has, among other things, a decent reader.

I haven’t used any other readers, so when I say “decent”, it's not relative to anything else. It's very functional, like Gmail. And the user interface isn’t great, like Gmail. But it's pretty neat.

I have to rebuild a list of RSS feeds. If you come across this post, please let me know of your own personal feeds, and any others that you recommend. Please don’t assume that I already have it, even if I should.

Also, I downloaded and am currently using WINDOWS LIVE WRITER. Good stuff. This will greatly improve readability for people who take offense at typos. (I’m thinking of someone in particular. You know who you are.)

In a related story: I spent today getting a lot of stuff organized. I have multiple drives with duplicate code, documents, databases, etc. I’ve sorted through most of that. I also have some VMs:

1 – super secret side project that I dumped because it wasn’t respecting my time

2 – JTS

3 – Other development efforts, including DvdFriend

I’m getting all of those ducks in a row. The VM for #3 exists, but I haven’t set up the DvdFriend stuff yet. I’d like to get that going; I haven’t touched DvdFriend in months, and I’m itching to do some stuff. I started putting some prices and links in this week; Amazon and Netflix still work. Everyone else has changed their html, so the parser isn’t working. Oh well. 2 is better than none.

The last time I worked on DvdFriend, I made progress toward adding TV reviews at the SHOW, SEASON, and EPISODE levels. Of course, that was months ago and I have no idea where I left off or how I did it. Let's hope that I can read my own code.

Saturday, February 7, 2009

There's nothing wrong with a test hitting the database

Let me preface this rant with a preface. I work at a company that's all about agile, TDD, Scrum, and other cool words. Personally, I am all for those things. On the rare occasions that I have code unaccompanied by tests, I feel guilty. There should always be tests.

But, that's the extent of my opinion of it. Write tests that test the code you're about to write. I'm not into mock objects. I don't debate the validity or non-validity of any one approach vs any other approach. I do what I have to do to produce a test that proves the code works. I really try to keep it simple. If I need a dummy implementation of an interface to prove something, then I spend 6 seconds to write the implementation.

But, being in a company where there are lots of people with much stronger opinions about it, I hear a lot of stuff. When is a unit test no longer a unit test but a functional test? Should unit tests be allowed to hit the database? What should tests do and not do? Yadda yadda. I do not doubt the importance of those conversations or the ramifications of the results; it's just not something I participate in. I'm more about the code and proving the code works, not the philosophy or implementation behind it.

One of those things that comes up quite a bit is "the tests should not hit the database". My response to most things is "well, it depends on the test". If you're writing tests that are implicitly hitting the database, then sure, in that case the database component should be swapped out with something simpler and faster without the environmental requirements. Yippee.

But, sooner or later, you come down to the object that actually does the writes to and/or reads from the database. I'm sure you can emulate it, but if the object is a db object, then I'm of the opinion that you should make sure it reads and writes to/from the db. I don't know where that opinion stands in the overall view of the agile/tdd community, but I have heard blanket statements that tests should not hit the database.

Last night, I got a call after hours asking me to look at some tests that were failing. I immediately stated it was environmental since the tests were 2 years old and hadn't been touched in 6 months, and then I set out to prove it. The cause of the failure was a missing row of "system delivered data" from the database that was there prior to the related project, and should always be there.

If my test mocked the db activity rather than running against the real scenarios, then we wouldn't have learned that the data was gone until someone fired up the product for real and tried to use it. The missing row was an adverse effect of a major database effort by another team. I wasn't involved with the fix, but as soon as the problem was identified they had no trouble fixing it, so it seems to have been minor.

So is it a functional test or is it a unit test? I don't know, and it doesn't matter to me. I wrote a test to prove that the code works, and as soon as an environmental dependency vanished, the test failed. That's the important part.

Maybe there should've been a FitNesse test or some other automated test that would've tested the functionality within the website. Maybe that test does exist and we just didn't get to it yet. That's possible. But it never got that far. They did the build and they ran the MbUnit tests, and we immediately knew there was a problem.

There is one takeaway from this: as a developer, the exception message allowed me to quickly identify what the problem was. But it was implicit; I mentally traced it to the actual cause. My takeaway is to proactively check for this condition and throw an explicit error message.

To those who say "your tests shouldn't hit the db", I say nay. Maybe I won't get the Agile Developer of the Year award, or maybe some in the TDD community will frown upon me, but the db hitting test identified a problem.

We (my team) have lots of tests that hit the db. This one is a bit different since it's known system-delivered data, but most of the other tests are not. In those cases, they insert all the test data they need, run the tests, then clean out all the test data. We achieve this, in part, by not using identity fields on our setup tables. All our test data gets inserted with ids > 10,000,000. We have a db test harness that's a facade for all of the things we need to do. The finally of every test calls a method that clears out all of the test data. All of these tests are flagged as "long running"; we run them locally, but not as part of the nightly build. In fact, we have a unit test that stress tests certain procs for thread safety.
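The insert/run/clean pattern sketches out like this. The harness type, method names, and the test itself are all hypothetical stand-ins, not our actual code:

```csharp
// sketch of the insert/run/clean pattern (all names are hypothetical)
[Test]
public void OrderTotalsAreComputed()
{
    var db = new DbTestHarness();   // hypothetical facade over the db work
    try
    {
        // test data always lives above the id threshold
        int orderId = db.InsertTestOrder(10000001);
        Assert.AreEqual(2, db.GetOrderLineCount(orderId));
    }
    finally
    {
        db.ClearTestData();   // deletes everything above the threshold
    }
}
```

Because the threshold is baked into the harness, cleanup is one call and can never touch real rows.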

That concludes tonight's rant.

Saturday, November 1, 2008

AZURE update

Greetings

I've only seen my very simple project work twice. The first time was when I first built it; the 2nd time was some random success. Typically, it just times out after 3 minutes.

I'm setting this aside for now. This has inspired me on a project I've often talked about doing but never took on. So, I'm working on that now. I tend to start a lot of stuff and never finish it. This may be such a project, but at least for the moment, I'm motivated, and I'm working hard on it.

I'm going to use SQLCE as the default data store for the project. I haven't used it before, so it'll be neat. (Of course, you can swap it out with any data store you want, but it'll be ready to run out of the box because of CE.)

Friday, October 31, 2008

Resharper: "Use implicitly typed local variable declaration"

I've been using Resharper for a couple weeks now. I like it a lot. When I was given the option to get a license for it, I responded that I would rather have CodeRush. After all, I write more code than I refactor.

That ended up being an oversimplification, though. Resharper has lots of great goodies in it. Though I would still like CodeRush, Resharper is great all by itself. I learn to appreciate it more every day.

Except for the "Use implicitly typed local variable declaration" hint.

As an example, I have this line of code:

ServiceDescription description = attribute as ServiceDescription;

the type, ServiceDescription, is underlined with the aforementioned hint. Basically, it's telling me to declare the variable as var and let the compiler figure out the type for me.

I'm not a fan of that suggestion at all. If I know what type it is, then I want to specify the type. I don't need the compiler to figure it out for me. If I end up specifying a less than optimal type (e.g., I should've used XmlReader instead of XmlTextReader), then Resharper or FxCop will let me know, and I'll learn from my mistake rather than just let the compiler do its voodoo for me.

However, I don't want to be irrational and just blindly shut off the hint. I wanted to find the justification for it, so I started poking around. It seems that there are 2 prevailing schools of thought on this: those who think you should use it for everything, and those who think you should only use it for anonymous types.

After reading a few different things, I have committed to my opinion expressed above: if you don't know what type it will be (because it's anonymous), then use var. Otherwise, specify the type.
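In code, the position boils down to this (a contrived example; the second line is the one from above, context omitted):

```csharp
// anonymous type: var is the only option, so use it
var stats = new { Count = 3, Label = "films" };

// known type: spell it out
ServiceDescription description = attribute as ServiceDescription;
```

Anonymous types have no name to write down; everything else does, so write it.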

A guy from Resharper justifies it here:

http://resharper.blogspot.com/2008/03/varification-using-implicitly-typed.html

While interesting, it doesn't sell me. Some comments on some bullets:
- It is required to express variables of anonymous type - no kidding. That's why it was invented.
- It induces better naming for local variables - that's putting a square peg in a round hole. It's handholding at best. If you name your variable CURRENT, then its scope should be so small that it always remains obvious what it is. If it isn't obvious, then you named it wrong, and declaring it of type var isn't going to make you name it any better.
- It induces variable initialization. - Again, I don't need var to force me to do that.
- It removes code noise. - Maybe. I'd like to see some samples before I buy it.
- It doesn't require a using directive - so what? Are using directives troublesome to anyone? Heck, Resharper puts it in for you. If you don't have Resharper, then CONTROL+. will put it in for you.

That's my story.

Thursday, October 30, 2008

Windows Azure - Cool

Greetings

By recommendation of a co-worker who is at PDC, I immediately signed up for AZURE and downloaded all of the associated files. I also had to update this machine to 3.5 SP1 and VS2008 SP1.

The following is just running commentary on what I'm doing as I do it.

Once everything was in, I opened up VS2008 and found the new CLOUD SERVICE project options. I can only guess what a worker is, so I kept it simple and started with just a simple WEB CLOUD SERVICE.

This creates 2 projects: the service itself and a web role. Again, I can only speculate on how we'll use WEB ROLE based on the name. The service project has a ROLES folder with a reference to the ROLE project. Swell.

The role project looks like a website. It has a web.config, Default.aspx, Default.aspx.cs, and Default.aspx.designer.cs. (I think that last file is new too. If it's been in 2008 all along, then I never noticed it. Don't know if it's an AZURE thing or an SP1 thing.) The service project has two configuration files: a csdef and a cscfg. One defines the service, the other configures it. The cscfg defines the endpoint name, port, and protocol. It notes that the port must be 80 in the actual cloud environment, though you can use whatever you want in the dev environment.

I didn't make any changes since I really have no idea what I'm doing yet. I hit the RUN button to see what happened.

The status bar reports "Initializing Local Development Service", or something like that. VS seems to hang for a while, then reports that it can't find .\SQLEXPRESS. That's fine. I don't have SQLEXPRESS. Then it hangs some more, and eventually says that the service timed out.

For kicks, I started downloading sqlexpress 2008. I haven't used 2008 yet, so now's as good a time as any. In the meantime, though, there must be a way to switch the database info.

I found the answer in C:\Program Files\Windows Azure SDK\v1.0\bin\DevelopmentStorage.exe.config. I changed the setting, then hit run in VS2008 again. It reports that DEVELOPMENT STORAGE IS ALREADY RUNNING. ONLY ONE INSTANCE OF THE APPLICATION CAN BE RUN AT THE SAME TIME. Then it hangs again, and eventually comes back with the SERVICE TIMED OUT error.

I looked in Task Manager/Processes and services.msc for any sign of this thing. No luck. That's not to say it's not there, but I didn't see it short of looking at each process individually. I checked for things like AZURE and DEVELOPMENT, etc. (In retrospect, I should have looked through the VS2008 menus and icons. There's probably something there.)

Oh well. Time to restart VS2008. This time, when clicking run, it asks me if it's ok to do some initialization as an administrator. Fine with me; go for it.

It reports this:
Added reservation for 'http://127.0.0.1:10000/' for user account 'jayavst690\jaya'
Added reservation for 'http://127.0.0.1:10001/' for user account 'jayavst690\jaya'
Added reservation for 'http://127.0.0.1:10002/' for user account 'jayavst690\jaya'

Checking if database 'DevelopmentStorageDb' exists on server '.\personal'
Creating database DevelopmentStorageDb

Granting database access to user 'jayavst690\jaya'
The login already has an account under a different user name.
Changed database context to 'DevelopmentStorageDb'.
Adding database role for user jayavst690\jaya
User or role 'jaya' does not exist in this database.
Changed database context to 'DevelopmentStorageDb'.

Initialization successful. The development storage is now ready for use.

Now, there's a DEVELOPMENT STORAGE icon in my system tray. Was that there before? Didn't notice. This time, it gave me a balloon to let me know it was up and running. There wasn't one when it failed.

The development storage app shows that there are 3 services: Blob, Queue, Table. Table is stopped; the other 2 are running. The menu bar doesn't give us a whole lot to do.

I closed it and ran it again. This time, VS2008 hung. It gave me the "vs2008 is waiting for an internal process" message. Swell. I see that in SQL Server Management Studio 2005 all the time (usually when working with diagrams), but this is the first time for VS2008.

I got tired of waiting, so I killed it from the task manager. While in there, guess what I noticed: DevelopmentStorage.exe. That definitely wasn't there when I checked earlier.

I restarted vs2008, and ran the project again. Development Storage started, but vs2008 is hanging again.

So far: Lots of hanging.

Now I'm just trying to get it to do something. I read a blog entry that tells me WEB ROLE isn't what I thought it would be (I was guessing just based on the word role). My goal is to get a silly DvdFriend service going. I'll keep you posted.

Thursday, July 31, 2008

Related Products

I added some support for "Related Products". If you select Dark Knight, for example, the product page will also show you Batman Begins.

This involved three new tables:
- ProductGroupType
- ProductGroup
- ProductGroupMembers

Two new views:
- vwProductGroups
- vwProductGroupMembers

Stored Procedure:
- GetRelatedProducts

I added a new object data source to the page, and a gridview. The ODS calls a method that calls the stored procedure and returns a datatable (keep it simple).

Currently, it just lists them with links. That will improve. Also, a product may be associated to multiple groups. The page will have to improve to show the different groups. For now, they're all just merged into one distinct list.
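GetRelatedProducts is probably little more than a self-join across the new tables. A sketch; the table names are from above, but the column names are hypothetical:

```sql
-- sketch of GetRelatedProducts (column names are hypothetical)
create procedure GetRelatedProducts
    @productId int
as
    -- find every product sharing a group with the given product
    select distinct m2.ProductId
    from ProductGroupMembers m1
    join ProductGroupMembers m2
        on m2.ProductGroupId = m1.ProductGroupId
    where m1.ProductId = @productId
      and m2.ProductId <> @productId
```

The DISTINCT is what flattens multiple group memberships into the single merged list the page currently shows.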

I'm not going to make an effort to backfill groups. But, as new movies come up, I'll create new groups as appropriate. So far, there are groups for Stargate, Lost Boys, Hellboy, and Batman.

Wednesday, July 30, 2008

MVC Preview 4

Preview 4 came out yesterday-ish (at least, that's when I first heard of it). I haven't updated my test site yet, but I will. Tonight I got hung up writing some queries for my side venture. I have another task to do for them, then I'll be back for MVC Preview 4 and new work on the DvdFriend site. (Next up, I think, is product-level comments. Let's rant and speculate about movies and DVDs without actually seeing them!)

Monday, July 28, 2008

Added News and Feed

I added a NEWS section to the top of the main page. It will really be more than news, though... more like notices, perhaps.

The News support was already in there. It was displayed on the page somewhere about a year ago, but it didn't look good or wasn't useful, so I got rid of it.

Fortunately, the method to retrieve it is still there. Basically, I just call GetRecentEntries(5, "news", true).
Last 5 entries; news zone; get the entire text rather than just the title.

There's a news feed too, but it's going to need work. The title of a blog entry may contain HTML (e.g., the link to Dark City), but a syndication title may not. I have to change the entry page to distinguish between the title and what the title links to (if anything). In the meantime, it tears the html out of the title if there is any. Then, for the permalink, it extracts the link. If the link exists, it uses it. Otherwise, it just links to the main page.
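The interim strip-and-extract step might look something like this. A regex sketch only; rawTitle and mainPageUrl are hypothetical names, not the actual code:

```csharp
using System.Text.RegularExpressions;

// sketch: strip tags from a title and pull out the first href, if any
// (rawTitle and mainPageUrl are hypothetical stand-ins)
string title = Regex.Replace(rawTitle, "<[^>]+>", "");
Match link = Regex.Match(rawTitle, "href\\s*=\\s*\"([^\"]+)\"");
string permalink = link.Success
    ? link.Groups[1].Value
    : mainPageUrl;   // no link in the title: fall back to the main page
```

Good enough for a feed; the real fix is storing the title and its target separately.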

Sunday, July 27, 2008

Web delay fixed

The production site, since moving to GoDaddy, has always taken a few seconds to load on the first hit. The problem is that first hits are very frequent. Almost every time I go to the site, it's a first hit.

It never really bothered me much because there are only a few visitors a day, but now that the embedding is enabled, its more intrusive. The hamletcode blog would pause as it waited for the dvdfriend site to fire up to serve the demo embedded review.

I poked around in the pool settings and found that the worker process was set to shut down after 5 minutes of being idle. I disabled that. I also saw that it was set to recycle every 29 hours, so I got rid of that too. There's really no reason for the worker process to need to be recycled, but this will be a good test to make sure.

Since I was in there anyway, I changed the session timeout from 20 to 60 minutes. Effectively, the timeout was 5 minutes anyway since there's rarely more than one person writing a review at any given time.

This should resolve all the timing issues. Also, it will help cover up a bug with the review page: if the session times out while you're writing the review, you lose it. That's definitely very bogus, and I have no reason to avoid fixing it other than a complete lack of interest. In fact, that's the last thing that's stopping me from removing the "under construction" label.


In conclusion

Reviews can now be embedded

I was going to make this feature available to only people that are logged in, but then realized that it probably doesn't make sense to limit my exposure. Embed away!




The code goodies follow. ReviewHtml.GetReview() simply builds a bunch of javascript document.write() calls.

[ServiceContract]
public interface IContentService
{
    [OperationContract]
    [WebGet(UriTemplate = "review/{reviewId}", BodyStyle = WebMessageBodyStyle.Bare, ResponseFormat = WebMessageFormat.Xml)]
    Stream GetReview(string reviewId);
}

public class ContentService : IContentService
{
    public Stream GetReview(string reviewId)
    {
        return new MemoryStream(Encoding.UTF8.GetBytes(ReviewHtml.GetReview(new Guid(reviewId))));
    }
}

Here are some interesting things I learned:
- I had to return the text as a stream rather than as a string. If you return it as just a string, then there's always some type of serialization wrapper around it. Additionally, the generated html tags get encoded, so we get &lt;td&gt; instead of <td>.

- In the UriTemplate, you specify the parameters.

UriTemplate = "review/{reviewId}"

It maps the value in {reviewId} to the reviewId parameter of the method. That's cool; very MVC-ish. Unlike MVC, however, it must be a string. In this case, the ID is a guid, but the method must accept a string.

I find this disappointing and, if I had to guess, I'd say it will change. The framework is certainly capable of converting known types for us.


I really just sort of tripped through the WCF stuff in this case. I have to read up on the new REST capabilities.

Wednesday, July 23, 2008

Good bye faithful printer.... Good bye!

I have mixed feelings about the passing of my printer.

Well, see... there. I've lied already. In the very first sentence I'm lying like a president. The printer isn't so much "passing" as it is being put down.

I have a Lexmark X125 all in one printer. It is AT LEAST 5 years old; possibly even more than 6, but I have definite milestones to peg it at least 5.

It's a cool little printer that I got for cheap, and it has lasted, to some extent, all this time. It still works. In fact, shortly after I bought it, I bought one for my parents too. (Coincidentally, last night I received a call about a new printer they bought. I believe that they, too, were still using their X125 until recently, but that's unconfirmed.)

So why the mixed feelings? On one hand, I bought a cheap printer and used it for 5 years. On the other hand, it annoyed me every time.

Here's the thing: the drivers suck. They always have. Lexmark support was 0 help... I gave up years ago. The printer will work for a while, then you have to kill some processes in order to get it to print again. I exchanged many emails with them just trying to get them to come clean and say "sorry, we suck", but they wouldn't.

The next logical question may be, "How did you put up with that for 5 years, you poor poor soul!?" The answer is simple: I don't do a heck of a lot of printing. I bought a box of paper years ago and I still have most of the reams. I just don't have a need or a desire to print. If I want to read something from the computer, I just read it on the computer. It doesn't have to be paper. I keep all our digital images as digital images... I don't print them.

Lately, I've had a need to print some stuff, for expense reports, on a monthly basis. My luck has been limited. I occasionally go to the office with the intent of printing the stuff, but then it slips my mind and I leave empty-handed. Ick. I also have to fax my receipts, and the X125 was being difficult. It would often say "replace cartridge" even though it was a new cartridge. After multiple attempts, I thought it was dead. Then, it finally worked.

So now, at long last, we're at the point where it is essentially unusable. I want to be one of the cool kids that simply clicks "print" and the thing prints. Is that too much to ask? Am I being a snob by not wanting to fight to print for 5 minutes a page? I don't think so, but I value your opinion, so let me know.

Due to the fact that I work from home, my esteemed employer found it in its heart (and wallet) to equip me with a brand new HP5610. I plugged it in and clicked print. Guess what... it printed! Now I feel like a king. Rejoice!

I'll cart the ole X125 up to the recycling center on my next trip. When I dump it in the box, perhaps I will pause for just a moment and reflect on the times gone by, both good and bad, but I make no promises.

Coming Soon - Embedded Reviews

Chris has stated his desire to be able to embed his DvdFriend reviews elsewhere in the internet galaxy. Interesting idea. I haven't done that before. (For those of you just joining us, I'm not real big on building web pages. There are a lot of things I haven't done.)

Anyway, I started looking into this. The first thing that jumped to mind was left over from 1995: Add an iframe. The second thing to jump to mind was: don't be ridiculous.

I started searching on the current swell ways to do this. The goal, per normal, is to keep it simple. I just want the user to drop a little piece of something on their page and have it work. I came across XSS pretty quickly, then avoided it, since XSS is often associated with bad mojo due to attacks. I looked at the object tag... couldn't get it to work with external web pages.

Eventually, I ended up back at XSS.



Created a page called ReviewScript.aspx
I wiped out everything from the ASPX except for the server tags at the top.

The Page_Load calls Response.Clear(). It then builds a big piece of javascript that, basically, generates some html and writes it to the document.
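A minimal sketch of what that Page_Load does. The markup details are stand-ins; ReviewHtml.GetReview is the same helper used by the WCF service, and (as noted further down) the page also document.writes the link tag for the external CSS:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    Response.Clear();
    Response.ContentType = "text/javascript";

    // Build the review html; the id comes in on the query string.
    Guid reviewId = new Guid(Request.QueryString["id"]);
    string html = ReviewHtml.GetReview(reviewId);

    // Pull in the external stylesheet, then emit the review markup
    // as one document.write per line, with quotes escaped.
    StringBuilder script = new StringBuilder();
    script.AppendLine("document.write(\"<link rel='stylesheet' type='text/css' href='http://www.dvdfriend.us/external.css' />\");");
    foreach (string line in html.Split('\n'))
    {
        script.AppendFormat("document.write(\"{0}\");", line.Trim().Replace("\"", "\\\""));
        script.AppendLine();
    }

    Response.Write(script.ToString());
    Response.End();
}
```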



Add a script tag to the page that references ReviewScript.aspx


<script language='javascript' type='text/javascript' src='http://www.dvdfriend.us/ReviewScript.aspx?id=xxx'></script>


Sweet. I started by including that on the dvdfriend main page (dev version) so that I could compare what's generated to what shows up on the page.


Templates / Make it look as it does on the main page

The generated html is based on a template. The default template is going to look exactly like a review does on the main page. I started by embedding the script on the main page so that I could look at them next to each other. Once it was close, I moved it to another site altogether.



Create a new CSS

As soon as I imported dvdfriend.css into the other site, it messed up the entire page. That was expected. I created a new css called external.css and copied over only the styles I needed. I renamed them all with a prefix of DF, just to keep them separated.


Incidentally, the css is included by document.writing a link tag.


That pretty much did it.

Template - So Far

The template has these tokens so far:
DvdFriendCss
Rating
Title
ProductTypeImage
ProductName
ProductId
CreateDate
Author - pending. Have to populate this
RatingClass
AuthorLink
ProductLink
ReadLink



The list will grow. Most of them are just pieces of data so that you can build it any way you want. Some of them are more generic, to give you something to start with. The LINK tokens, for example, automatically create the links as you see them on the home page now.
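Token replacement itself can be as simple as a dictionary of token/value pairs and a loop. A sketch only; the {Token} delimiter syntax is an assumption, not necessarily what the real templates use:

```csharp
using System.Collections.Generic;
using System.Text;

static class TemplateEngine
{
    // Swap each {Token} in the template for its value.
    public static string ApplyTemplate(string template, IDictionary<string, string> tokens)
    {
        StringBuilder result = new StringBuilder(template);
        foreach (KeyValuePair<string, string> token in tokens)
        {
            result.Replace("{" + token.Key + "}", token.Value);
        }
        return result.ToString();
    }
}
```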



TODO

- See if there is a better way to include the CSS. If there are multiple embeds on the same page, it will import the css multiple times. I'd rather do it through javascript.


- Retrieve the author. The page is built from a datatable. The script is built from a blog object which, mysteriously, doesn't already have an AUTHOR property exposed.


- Work out some additional css issues. It almost looks like it does on the site, but I still have some font issues to resolve.


- Test in the production environment. I've only used it on my local machine. Let's see if it actually works out there. Furthermore, let's see what types of things, if any, prevent the script from firing.


- LATER: Allow users to create their own templates using the available tokens. I think that every template will be available to every user, but each can only be edited by the person who created it. We'll see.

- Brush my teeth and go to bed

- Serve it as a WCF service rather than an aspx page. The aspx is a quick-and-dirty to get it going. I will convert it to a WCF service, much like the RSS feed. The one missing piece of info there is how to pass parameters. It shouldn't be a big deal; I just have to look into it.

Screenshot

Here's what it looks like embedded on the now neglected Clan Friend site.

The background of the review is always white. It's not inheriting it from the parent element.

Sunday, July 20, 2008

First RSS Feed is live

I deployed the first DvdFriend RSS feed. I kept it as a WCF service after all. I'll keep it that way until I come up with a reason not to.

The deployment wasn't painless; it didn't work in production as it did in development. The cause: there are multiple sites on the server that are distinguished by host headers. I learned that in 3.0 you had to code around that. 3.5 makes it a little easier. I found this link:

http://blogs.msdn.com/rampo/archive/2008/02/11/how-can-wcf-support-multiple-iis-binding-specified-per-site.aspx

I added http://www.dvdfriend.us as a prefix. I attempted to add http://dvdfriend.us as a second prefix, but that brought back the original error. Go figure. I don't have a good grasp on this stuff yet, but it's working, so we're OK for now.
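For reference, the 3.5 fix from that link is config, not code: a prefix filter that tells WCF which of the site's base addresses to use. As far as I can tell, it only accepts one prefix per URI scheme, which would explain why adding the second one failed.

```xml
<!-- web.config: pick which host-header binding WCF should use (one prefix per scheme) -->
<system.serviceModel>
  <serviceHostingEnvironment>
    <baseAddressPrefixFilters>
      <add prefix="http://www.dvdfriend.us" />
    </baseAddressPrefixFilters>
  </serviceHostingEnvironment>
</system.serviceModel>
```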

I think COMMENTS will be the next feed. I have to create a page for that anyway, so they'll have the same data source.

RSS 2 / Atom 1 Feeds

.NET 3.5 adds a WebHttp WCF binding and syndication support. I haven't tried anything with syndication yet. Now seemed like a good time.

The DvdFriend home page shows the most recent 50 ratings/reviews. I've received more than one request to make that, and comments, available as feeds. I started with the recent activity.


Setting it up as a WCF service was pretty easy, though I'll probably end up dropping it. I'll probably just expose it as SYNDICATION.aspx or something. We'll see. Regardless, doing it as a WCF service was a good experience.

Once I had a test feed going, it was time to populate it with the actual data. I thought LINQ would be the way to go. I already have a static method that returns a list of the recent reviews as a data table. I thought I'd just write a LINQ query against that data table. No dice; LINQ doesn't work directly on data tables. Swell. (Well, not really.)

I wrote a generic wrapper class to make things enumerable for linq. (Is this the best solution? I don't know. But it works.)



NOTE: I changed this code later. See the notes at the bottom. (Changed GetEnumerator to return _items.GetEnumerator() rather than yielding through it itself.)

Next, I had to write a LINQ query that would return a List of SyndicationItem objects. In the process, I had to explore some of the properties and methods to see what was what. I set the basic stuff, but there were 2 things that I wanted to set, but couldn't, during initialization:

- The author. SyndicationItem.Authors is a collection, so you don't initialize it; you have to add to the existing list.

- The item link. At first, I figured this would be BaseUri, but it wasn't. You have to call item.AddPermalink(new Uri("...")). It's a method; you can't call it during initialization.

For the author, I could loop through all the items after the initial query and update the individual items from the datatable. But I'm not interested; that didn't sound like a very good solution. I only want to hit the datatable once and be done with it. The permalink would require a loop too, but that's based on the ID, which is already a property of SyndicationItem, so no problem there.

I solved the author issue by creating a subclass of SyndicationItem called AllardWorksSyndicationItem. I added a property called AUTHOR so that I can save the information during the load. Then, I can loop through and update the AUTHORS collection without having to bother the data table.



The LINQ query now returns a List of AllardWorksSyndicationItem. After the query, I loop through the items, create the permalink (based on the ID, which I already have), and add the value of the new AUTHOR property to the existing AUTHORS collection.
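Roughly, the subclass and the post-query loop look like this (a sketch based on the description above; the permalink URL pattern is a stand-in):

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;

// Carries the author through the initializer so the datatable
// only has to be read once.
public class AllardWorksSyndicationItem : SyndicationItem
{
    public string Author { get; set; }
}

public static class FeedBuilder
{
    // After the LINQ query runs, finish off each item: permalink from the id,
    // author from the new property into the real Authors collection.
    public static void FinishItems(List<AllardWorksSyndicationItem> items)
    {
        foreach (AllardWorksSyndicationItem item in items)
        {
            item.AddPermalink(new Uri("http://www.dvdfriend.us/?id=" + item.Id));
            item.Authors.Add(new SyndicationPerson { Name = item.Author });
        }
    }
}
```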




But wait. There's more. The SyndicationFeed object has a property called Items. You set it to an IEnumerable of SyndicationItem, not an IEnumerable of AllardWorksSyndicationItem, so the list needs to be converted. I achieved this with another LINQ query which does the cast.


Then, you return a formatter for the type of feed (I chose atom), and that's a wrap for tonight.
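The cast and the formatter boil down to a couple of lines (a sketch; Cast<T>() works here because every AllardWorksSyndicationItem is a SyndicationItem):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Syndication;

public static class FeedHelper
{
    // Upcast the items for SyndicationFeed.Items, then wrap the feed for Atom output.
    public static Atom10FeedFormatter BuildFormatter(SyndicationFeed feed,
                                                     List<AllardWorksSyndicationItem> items)
    {
        feed.Items = items.Cast<SyndicationItem>();
        return new Atom10FeedFormatter(feed);
    }
}
```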

Final Code


Rendered in IE


Coming soon!
UPDATES
I revisited some of this.
I realized that LinqWrapper.GetEnumerator was silly, because DataRowCollection is already IEnumerable. All you have to do is

return (IEnumerator)_items.GetEnumerator();

rather than yielding through it yourself.
Then I was bothered by the fact that DataRowCollection is already IEnumerable, so why do I need this stupid wrapper class at all? The reason is that DataRowCollection is only the non-generic IEnumerable. LINQ requires the generic IEnumerable<DataRow>, so the wrapper converts between the two. I tried finding other ways around it.
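A sketch of the wrapper as it ended up; the class and field names are guesses from the notes above:

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Data;

// Adapts the non-generic DataRowCollection to IEnumerable<DataRow> for LINQ.
public class LinqWrapper : IEnumerable<DataRow>
{
    private readonly DataRowCollection _items;

    public LinqWrapper(DataRowCollection items)
    {
        _items = items;
    }

    public IEnumerator<DataRow> GetEnumerator()
    {
        // The collection only exposes a non-generic enumerator,
        // so cast each row on the way through.
        foreach (DataRow row in _items)
        {
            yield return row;
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```

In hindsight, 3.5 can do this without a new class: table.Rows.Cast&lt;DataRow&gt;() (System.Linq), or table.AsEnumerable() if you reference System.Data.DataSetExtensions.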

Monday, July 14, 2008

Email Problems

Greetings

People on my mail server haven't received any email since 7/10/2008. That's a problem. I took a look at the server and, as far as I can see, it's OK. I put in a request with support.

If you'd like, you can contact me at hamletcode@gmail.com in the meantime.

Monday, July 7, 2008

Prologue Complete

The final tally for the prologue draft is just over 9 pages. Chapter 1, so far, is about a third of a page. I expect that to get larger, even if I have to resort to increasing the font size.

Sunday, July 6, 2008

Lack of direction

I'm spinning my wheels trying to find something good to do. I watched a webcast on Dynamic Data, which was great. I'm going to start reading up on ADO.NET Entities. But I really can't come up with a project that I'm interested enough in to really pursue.

I keep thinking about a pubsub project, but haven't been able to commit. I've worked on the DVD site here and there, but nothing groundbreaking. I'm lacking a sense of purpose.

Lacking anything useful to do, I've spent some time trying to write a sci-fi story. I used to write a lot more when I was younger. Now, I usually find it more rewarding to code. Since that hasn't been working out lately, I'll give writing another shot.

I've been thinking about this story for quite a while. So far, it's called Savior, but I just made that up a minute ago and it probably won't stick. The prologue is mostly complete, and I should start on Chapter 1 tomorrow. It'll end up being a pretty short story; a lot of stories take 5 pages to say hello. In mine, I just say "hello". I don't know enough words to drag it out much longer than that. It certainly won't be novel length.

Wednesday, June 18, 2008

Indecision results in Netflix

For the last couple weeks, I've been torn about what to work on.

I got bored with the clan site. Once I worked through all of the MVC.NET stuff, I lost interest.

Last week, I went to a SOA / ESB conference. For several days, I thought about writing a .NET pubsub system.

I've also been thinking about writing a validation component. Every one I've seen (not many) requires tight coupling between the validator and the object being validated.

I've also been thinking about JTS, which is a pretty big piece of software that I wrote. (Muvico uses it.) I started rewriting part of it in .NET 3.5 with the intention of, perhaps, renting it out to other theater chains. I still might do that.

Then, I came back to DvdFriend. When all else fails, work on DvdFriend.

I added Netflix to the list of vendors. Unfortunately, it didn't play nicely with the existing vendor framework. Netflix requires you to be logged in in order to see the full product page, so I had to put in a piece of custom code to handle that.

- Changed the scraper to use HttpWebRequest instead of WebClient.

- Added a Netflix hack to handle the Netflix-specific stuff. (The next time this happens, I'll have to refactor to an OOP-friendly solution. This is just one case, though, so no pattern yet. My goal was to implement Netflix, not write tons of new code.)

- Added a RentalOnly bit field to the vendor table.

- Changed the user control to show RENT when it's a rental product.

- Much of the code ignores prices where the price <= 0. Rather than deal with that, I set Netflix's price to a fixed 999.00 (which never shows up anyway). Added a FIXED method to the parser to support this.

- Changed the save-price stored procedure, for admin convenience. Normally, you would select a product type of RENTAL, then search for the product. Since DVD and BluRay are already there, the new check will make it a rental regardless of the selected product type.
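The HttpWebRequest switch is what makes the login workable: unlike WebClient, it lets you carry cookies across requests. A sketch of the idea; the URLs and form fields are placeholders, not Netflix's real ones:

```csharp
using System.IO;
using System.Net;
using System.Text;

// Log in once, keep the cookies, then fetch pages that require the session.
public class AuthenticatedScraper
{
    private readonly CookieContainer _cookies = new CookieContainer();

    public void LogIn(string loginUrl, string postData)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(loginUrl);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.CookieContainer = _cookies;  // session cookies land here

        byte[] body = Encoding.UTF8.GetBytes(postData);
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }
        request.GetResponse().Close();
    }

    public string GetPage(string url)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.CookieContainer = _cookies;  // send the login cookies back
        using (StreamReader reader = new StreamReader(request.GetResponse().GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```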

I don't plan on backfilling all of the products, though I did go through most of the products in the left and center columns. I'll probably do the right as well. Going forward, I'll keep up with it, and maybe do a few others here and there, but it won't be a big effort.

Now that the netflix product id is collected, the next logical step is "ADD TO QUEUE", and related.

After that... who knows. Amazon Unboxed? (Unlike Netflix, I think Amazon Unboxed probably has a commission.)

As always, if you would like to support the site, please consider using it for your product purchases.

Tuesday, June 10, 2008

DOOM 4

It takes a while for me to get to the point. Hang in there.

Chris mentioned that "Hitman is the best video game movie to date" on the DVDFriend website. I countered by suggesting that Doom, Mortal Kombat, all 3 Resident Evil movies, and possibly the first Tomb Raider movie were all better. He agreed with a lot of that, but disagreed on Doom. (It seems that he was zeroing in on a particular director rather than covering the full spectrum of video game movies.)

I liked the first Doom movie. It's in the "crappy but good" category. I love how they didn't bother with a PG-13... they just went for the R and did anything they wanted.

Anyway, that got me thinking about the Doom games. I never beat Doom 3. I suppose I should play it again, but I didn't have fun with it. Doom 1 and 2 were in big open areas with lots of monsters. You could kill a lot of the monsters with a single shot.

I didn't get very far in Doom 3. It seemed that you had to shoot everything 50 times, and you were always in tightly enclosed areas. I think I read that there could only be 3 baddies on the screen at a time. And the infamous flashlight... apparently your first-person self can't hold a flashlight and a gun at the same time. (There was a change for that later... I don't know if it was a 3rd-party mod or an id Software change.) Half Life 2 came out about the same time, and it's a FAR superior game. A later update to Doom 3 ripped off the Half Life 2 gravity gun.

Great... right. But what does this have to do with a tech'ish blog? I'm getting there.

After pondering the failure of Doom 3 for a bit, I jumped on Google to find Doom 4. I learned that, at the beginning of May, id Software announced that production on Doom 4 had begun.

"Swell", I thought (in first person). "it'd be great if I could post a link to that on the website".

The development project name for DVDFriend is "DVDBlog". So is the database. I approached the new design (which really isn't new anymore) in a completely different way. Everything is "blog" entries, which is to say, just a bunch of text. There's no real hierarchy of data. There are other tables to bring the data together in meaningful ways, with more to come eventually.

The DVDFriend site is broken up into zones. I can specify that different things go into different zones, though I have never actually done that. It's functionality that I have to revisit, because the database and API both support my wish to show a list of NEWS links in the right margin. However, the admin site does not. Rather than jump in and manually insert the data, I'd rather write about not doing it, as I am now.

Actually, though, it would require a new zone. The right zone is currently occupied by the DVD list. I'll have to put it outside the margin. It'll give it a certain symmetry.

When will I actually get to this? I feel that simply acknowledging it is sufficient. I don't know when I'll actually end up doing it. The way this usually works is that, as I type this, I'm really not in the mood. But it will fester, and then I'll finally commit to it while watching TV one night. Will the pattern repeat? We'll see.

In the meantime, I'm thinking about other things. In particular, I'm thinking about SOA, ESB, and the possibility of writing a .NET pubsub. Stay tuned.

Sunday, June 8, 2008

Cardspace, Rounds 1 and 2

ROUND 1
One day last week, I figured it was time to learn some stuff about CardSpace. I read some articles and found some good startup guides. I learned about EV certificates.

I installed a test EV certificate on my local development machine, and created a secondary login page for DVDFriend. I put the cardspace object tag on the new page.

Everything went pretty smoothly until that point. Then I hit the button, and disappointment ensued. The applet popped up and asked me for a card. Then there was a flash, and it and the IE instance both froze. I went into process manager and killed icardagt.exe and infocard.exe, which freed it up. That happened a bunch of times.

The event log revealed that it was a WS-Trust failure. I didn't look into it any more at the time, but it was disappointing that it kept crashing rather than handling the failure gracefully. It was a bad first impression, but oh well, things go wrong. I put it off for another day.

ROUND 2

I didn't change anything. I just came back to it a few days later, and everything worked as advertised. I wonder if that WS error was due to a remote service being down somewhere. It's probably still in the event log, so maybe I'll revisit it.

I'm going to be in a conference for the next 3 days, so I should have plenty of down time at night. Time permitting, I'll implement CardSpace as an alternative login to DvdFriend. Once in place, I can check CardSpace off my "things to do" list.