Friday, October 26, 2012

Implementing Drag and Drop from your application to Windows Explorer

Getting PowerBuilder to accept file drops from Windows Explorer is fairly straightforward.  Getting it to support drag and drop of files from your application to Windows Explorer is actually fairly straightforward as well, but not entirely obvious.  We're going to start by looking at how to do it with the .Net target types (WPF and Windows Forms) and then see how we might use the same technique for Classic Win32 applications.

WPF

What makes it fairly straightforward in WPF is the System.Windows.DragDrop class, a standalone class that takes care of all the tricky details for us.  All we need to do is create an instance of the System.Windows.DataObject class, load it up with the file names that we want to send Windows Explorer, and then pass it to the DoDragDrop method of the DragDrop class along with the drag source and an enumerated value from System.Windows.DragDropEffects to indicate what we're doing with the files (e.g., copy).

Create a WPF target in PowerBuilder.Net and a WPF window that it opens.  Then drop a native WPF control (not a PowerBuilder control) onto the WPF window, perhaps a ListView control.  We're using a native rather than a PowerBuilder control because the DoDragDrop method doesn't recognize the PowerBuilder control as a potential drag source.  I've tried to get it to recognize the InnerControl property of the PowerBuilder control, and the script compiles, but it doesn't seem to work.  If I figure that part out, I'll update this blog.

If you've got a clean install of PB.Net, you probably don't have native WPF controls in your toolbox yet.  Simply right click in the toolbox and select "Add Tab".  You might name the new tab something like "Native WPF Controls".  Then within that new tab right click and select "Add Items..."  A dialog will appear that shows you all the WPF controls that the operating system knows are installed.  Select a few appropriate ones, hit OK, and they'll be in your toolbox for future use.

You also need to add System.Windows.Forms to the referenced assemblies because we're going to use some classes from it.  To make things a bit simpler when we code, go to the Usings for the Window and add the following references:

     System.Windows
     System.Windows.Input
     System
     System.Collections.Specialized

That means we won't need to keep adding the namespace info when we use classes from those namespaces.

Let's create an instance variable in the window now to keep track of where the user initially left-clicked on the control, as follows:

     Point start

Now let's find the PreviewMouseLeftButtonDown event for the control we added and add this to that script:

     IInputElement elm = sender
     start = e.GetPosition(elm)  // remember where the potential drag started

Now we're tracking the location of the mouse when the user left clicked.  The next thing we need to do is check to see whether the user has moved the mouse the minimum distance required to initiate a drag and drop operation.  We'll do that in the MouseMove event, where you can add the following script:

     IInputElement elm = sender
     Point mpos = e.GetPosition(elm)

     if e.LeftButton = MouseButtonState.Pressed! and &
        System.Math.Abs(start.X - mpos.X) > SystemParameters.MinimumHorizontalDragDistance and &
        System.Math.Abs(start.Y - mpos.Y) > SystemParameters.MinimumVerticalDragDistance then

          // Build the list of files to hand to Windows Explorer.
          // Point this at a file that actually exists on your system.
          StringCollection files
          files = create StringCollection()
          files.Add("C:\Users\bruce\Documents\PB12\DragDrop\wpfapp.out\simple_img_1.jpg")

          // Package the file list in a DataObject and start the drag.
          DataObject dataObject
          dataObject = create DataObject()
          dataObject.SetFileDropList(files)
          System.Windows.DragDrop.DoDragDrop( this, dataObject, DragDropEffects.Copy! )
     end if

What we're doing here is getting the current location of the mouse as it's being moved.  We're checking to make sure that the user is still holding down the left mouse button (otherwise it's no longer a drag operation).  And we're looking to see if the user has moved the mouse along both the X and Y axes more than the amount defined in the system parameters as the minimum to initiate a drag and drop operation.  By default that is 4 pixels, but the user can change it.

If all that is true, we create a System.Collections.Specialized.StringCollection object and add one or more file names to it (just one for this example, and you'll want to point it to an actual file on your system).  We then create the System.Windows.DataObject and give it the list of files.  Finally, we call the DoDragDrop method on the DragDrop class, referencing our native WPF control as the drag source, pass in the DataObject and tell it we're doing a copy.  We're including the full namespace reference for the DragDrop class to ensure that PowerBuilder doesn't confuse it with its own DragDrop event.

That's all there is to it.  Run your app and then attempt to drag from the native WPF control to the desktop or a directory in Windows Explorer and you should see the file copied to that location.

Windows Forms

Windows Forms is a slightly different variation on the same approach.  Windows Forms doesn't have the DragDrop class, but the Windows Forms Control base class does have a DoDragDrop method that accomplishes the same thing.  To make this demo a bit simpler, we're going to implement the code directly in the clicked event of the control.  I'll leave adding the logic to check for the correct amount of mouse movement as an exercise for the student (though there's a sketch of what that might look like after the code below).

So create a Windows Forms target in PowerBuilder Classic, create a window for it to open, and drop some PowerBuilder control on the window (perhaps a listview again).  You're also going to want to add references to the System.Windows.Forms.dll and PresentationCore.dll in the target.  (Right click on the target, select .Net Assemblies, and add them there.)

The code is going to look subtly different in the Windows Form application than it did in WPF.

#if defined pbdotnet then
     // Build the list of files to hand to Windows Explorer.
     // Point this at a file that actually exists on your system.
     System.Collections.Specialized.StringCollection files
     files = create System.Collections.Specialized.StringCollection
     files.add ( "C:\Users\bruce\Documents\PB12\DragDrop\simple_img_1.jpg" )

     // Package the file list in a DataObject.
     System.Windows.Forms.DataObject dataObject
     dataObject = create System.Windows.Forms.DataObject()
     dataObject.SetFileDropList ( files )

     // The compiler won't accept the PowerBuilder control as the drag
     // source, so declare a Windows Forms Control and use that instead.
     System.Windows.Forms.Control control
     control = create System.Windows.Forms.Control
     control.DoDragDrop(dataObject, System.Windows.Forms.DragDropEffects.Copy)
#end if

The first big difference of course is that the code is wrapped in an "#if defined pbdotnet then" and "#end if" block.  That's what tells PowerBuilder Classic that the code in question isn't standard PowerScript; it's a variation of PowerScript that the syntax checker ignores and that the PowerScript-to-C# compiler handles directly, which is what allows the references to .Net classes.

We're also using a slightly different class (System.Windows.Forms.DataObject rather than System.Windows.DataObject) to store the data.  We also need a System.Windows.Forms.Control to call the DoDragDrop method on.  Unfortunately, as with WPF, the compiler doesn't recognize the PowerBuilder control as a valid control for that.  However, it turns out we can simply declare an instance of that class within the script and use that.
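As for the mouse-movement check I left as an exercise, here's a rough sketch of one way it might be done.  This is just an illustration, not tested code: it assumes you've mapped user events on the control to pbm_lbuttondown and pbm_mousemove, that those events expose xpos and ypos in PowerBuilder units (in a mapped user event you may need to pull them from Message.LowWord and Message.HighWord instead), that ii_startx and ii_starty are integer instance variables on the window, and that you've added a reference to System.Drawing.dll, since SystemInformation.DragSize is a System.Drawing.Size.  DragSize is the Windows Forms counterpart of the WPF SystemParameters values we used earlier.

// In the event mapped to pbm_lbuttondown:
// remember where the click happened (PowerBuilder units)
ii_startx = xpos
ii_starty = ypos

// In the event mapped to pbm_mousemove:
// has the mouse moved far enough to count as a drag?
integer li_dragwidth, li_dragheight
#if defined pbdotnet then
     // DragSize is in pixels, so convert to PowerBuilder units
     li_dragwidth = PixelsToUnits ( System.Windows.Forms.SystemInformation.DragSize.Width, XPixelsToUnits! )
     li_dragheight = PixelsToUnits ( System.Windows.Forms.SystemInformation.DragSize.Height, YPixelsToUnits! )
#end if
if Abs ( xpos - ii_startx ) > li_dragwidth and Abs ( ypos - ii_starty ) > li_dragheight then
     // far enough - do the DoDragDrop call shown above
end if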

Once again, that's all there is to it.  You should be able to copy the referenced file to the desktop or a directory in Windows Explorer by dragging from the PowerBuilder control to either of those.

Win32 Classic

Well, if we can declare a control within the script to invoke the drag and drop in a Windows Forms target, we should be able to do the same thing from a .Net assembly and then just use that from a PowerBuilder Classic Win32 application.  And it turns out we can.  That's good, because doing it using the Windows API methods is a bit involved.

First, let's go back to PowerBuilder.Net, because we're going to use that to create the .Net assembly.  Create a .Net assembly target there, with a custom nonvisualobject as the object type we're going to create.  Add System.Windows.Forms to the referenced assemblies, because we'll be using classes from that.

Create a method on the nonvisualobject (I called mine dodragdrop) which takes an argument of type string called filename, and then add the following code:

// Build the list of files to hand to Windows Explorer
System.Collections.Specialized.StringCollection files
files = create System.Collections.Specialized.StringCollection
files.add ( filename )

// Package the file list in a DataObject
System.Windows.Forms.DataObject dataObject
dataObject = create System.Windows.Forms.DataObject()
dataObject.SetFileDropList ( files )

// As in the Windows Forms target, declare a Control to call DoDragDrop on
System.Windows.Forms.Control control
control = create System.Windows.Forms.Control
control.DoDragDrop(dataObject, System.Windows.Forms.DragDropEffects.Copy!)

You can pass more than one file at a time through the StringCollection; I'm only passing one here for demo purposes.  You might expand on this example to support multiple files on your own (a sketch of one approach follows).  The code is essentially what we used within the Windows Forms application.
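For instance, a variant that accepts multiple files might look something like this.  The array argument signature is hypothetical, and you'd want to verify that a string array marshals cleanly through the COM wrapper before relying on it:

// Hypothetical variant: dodragdrop ( string filenames[] )
System.Collections.Specialized.StringCollection files
files = create System.Collections.Specialized.StringCollection

// Add every file name the caller passed in
long ll_index
for ll_index = 1 to UpperBound ( filenames )
     files.add ( filenames[ll_index] )
next

// The rest is identical to the single-file version
System.Windows.Forms.DataObject dataObject
dataObject = create System.Windows.Forms.DataObject()
dataObject.SetFileDropList ( files )
System.Windows.Forms.Control control
control = create System.Windows.Forms.Control
control.DoDragDrop(dataObject, System.Windows.Forms.DragDropEffects.Copy!)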

Go back to the project object for the target now, because you have to tell it on the Objects tab that you want to expose the method you just created as public in the assembly.  While you're there, you might want to modify the Class name and Object name to give them less PowerBuilder-specific names.  Those will just be aliases for the real methods when they are exposed in the assembly.

While you're in the project, go to the Sign tab as well.  If we're going to load the assembly into the GAC and let PowerBuilder Classic access it there, it must be signed; you can't add unsigned assemblies to the GAC.  If we decide instead to leave the file in a Windows directory and reference it at that location, it is still highly recommended to sign the assembly.  In fact, the REGASM utility that creates the registry entries PowerBuilder Classic will use to access the assembly will warn you if the assembly is not signed and ask you to do it.

Signing the assembly is as simple as clicking the "Sign the Assembly" checkbox and then hitting "New" so that PowerBuilder will generate a strong name key file for you.  Now compile the assembly.

Hopefully you're using PowerBuilder 12.5, because by default it makes the classes within its assemblies COM visible.  We need that in order for PowerBuilder Classic to use them in a Win32 target.  If you're using an earlier version of PowerBuilder.Net there are ways to mark assemblies as COM visible, but that's beyond the scope of this blog entry.

What we need to do now is run regasm.exe on the assembly to generate the registry entries that PowerBuilder Classic needs.  For purposes of this demo, run it as follows:

     regasm.exe <name of your assembly>.dll /codebase /regfile:<name of your assembly>.reg

The /codebase option tells regasm to embed the physical location of the file in the registry entries, so we don't need to worry about adding it to the GAC.  The /regfile option tells regasm to write the entries out to a file rather than load them directly into the registry.  We do it that way because we might need to modify the file before loading it.
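For example, assuming the demo assembly compiled to pbdragdrop.dll (that file name is just my guess, based on the ProgID we'll see in a moment), the command would be:

     regasm.exe pbdragdrop.dll /codebase /regfile:pbdragdrop.reg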

The issue occurs when you are on a 64 bit operating system.  If you're still on a 32 bit operating system, you can skip this paragraph.  What happens is that, depending on which particular regasm file you run (there are several on your machine), the utility may have created 64 bit registry entries rather than 32 bit entries.  The easy way to tell the difference is that 32 bit entries on a 64 bit system have Wow6432Node as part of the key name.  That is, most of the entries will be under HKEY_CLASSES_ROOT\Wow6432Node\CLSID instead of just HKEY_CLASSES_ROOT\CLSID.  If you are on a 64 bit system and regasm left Wow6432Node out of the key names, add them before adding the entries to your registry.
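In other words, an entry in the .reg file that looks like this (the CLSID here is just a placeholder):

     [HKEY_CLASSES_ROOT\CLSID\{...}\InprocServer32]

needs to become this before a 32 bit PowerBuilder application can find it:

     [HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{...}\InprocServer32]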

To add the entries to your registry, just double click on the file.  You'll also want to examine the file to determine what ProgID was given to the class.  In my case, the ProgID was "pbdragdrop.PBDragDrop".  We'll need that information in just a bit.

Let's open up PowerBuilder Classic and create a standard application target, a window that it opens, and a control that we want to initiate the drag and drop from.  Once again, I've chosen a listview, and for simplicity we'll just code the clicked event.  The code you want to add there looks something like this:

integer li_rc
oleobject loo

// Connect to the assembly through its COM callable wrapper,
// using the ProgID we found in the .reg file
loo = create oleobject
li_rc = loo.ConnectToNewObject ( "pbdragdrop.PBDragDrop" )
if li_rc = 0 then
     loo.DoDragDrop ( "C:\Users\bruce\Documents\PB12\DragDrop\simple_img_1.jpg" )
end if
loo.DisconnectObject()
destroy loo

It's fairly simple.  We just create an oleobject and tell it to establish a connection to our assembly using the ProgID we obtained earlier.  Then we just call the method on the class (your method name will probably differ) to implement the drag and drop.  At that point we can disconnect from the oleobject and destroy it.  The .Net Framework handles everything for us under the covers, allowing us to treat the assembly like a COM object.  If you want more information on the technique, you might check out the documentation on the Microsoft site, or some of the numerous articles I've written about it for PBDJ.

Summary

That's it.  Drag and drop from PowerBuilder applications to Windows Explorer for 3 different target types.  And particularly in the case of Win32 targets, using what I would consider to be a much simpler method than what is required to implement it using the native Windows API.  Hope you find it useful.

Wednesday, October 24, 2012

Using Autoincrementing Columns from a PowerBuilder DataWindow

Note:  This blog entry is basically the written version of a video I did for the SAP Database and Technology Academy.  You can either watch the video or read the blog entry, or both if you're so inclined.




Why use Autoincrementing Columns?

Before we get into how, we first need to look at why we want to use autoincrementing columns.  So let's step back and consider how we want to create primary keys for the entities in our databases.  There are basically two approaches: using "natural" (intelligent) keys or "surrogate" (non-intelligent) keys.  Natural (or intelligent) keys mean we pick some attribute of the entity itself that we believe provides a unique value we can use to reference that entity.  We then use that key in foreign key references into other tables.

For example, let’s say we have a simple training database where we want to track employees, the classes offered, and if employees have taken the classes.  The entity-relationship diagram is shown below (see Figure 1).  If we’re using natural keys, we might use the employee’s
image001.png
social security number to identify the employee and the course number for the course to identify a course.  The courses an employee has taken are handled by the many-to-many associative table “training records” where the keys from the two entities (SSN from Employee and Course Number from Courses) are copied into the association records.  The Courses entity also has a “taught by” attribute that in addition uses the employee SSN to reference back to the Employee table to indicate who is teaching the course.

So far so good.  But then we run into some problems:

  1. Data entry error:  A year ago somebody entered the SSN incorrectly for one of the employees, an employee who not only takes courses but teaches some.  We just noticed the problem.  We need to change the SSN for the employee in the Employee table, but now there are numerous records in both the Courses and Training Records tables that have to be updated as well, and the foreign key constraints are making the update difficult.
  2. Changing requirements: The company decides that contractors need to take training classes as well, but they identify contractors through a contractor code rather than a SSN.  Suddenly the SSN field is no longer actually the SSN.
  3. Regulatory mandates:  The company determines that in order to meet privacy regulations all personally identifiable information (PII) must be removed from all but a handful of highly controlled HR systems.  The training database didn't make the list, and you now have to remove all records of SSNs from the system.

Sound far-fetched? It’s actually loosely based on a system at one of the companies I was at in my contracting days.

The way to prevent these problems before they start is to use surrogate keys: arbitrary but unique values (e.g., a sequential number, random numbers guaranteed to be unique, a GUID).  Those are added to the entities as the primary key and propagated to other tables to represent the relationships (see Figure 2).
image003.png
You don't show the values to the end users and apply no significance to them other than identifying the entities and the relationships between them.  (The minute you assign some other significance to them, you've turned them back into natural keys.)  You can apply alternate keys or unique indexes to the natural keys that you still want to ensure are unique within an entity (e.g., SSN for the employee records).  But you'll find that updates to those data values, changes to what they signify, or their removal are now much easier.
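For example, enforcing uniqueness of the SSN without making it the primary key might look like this (the table and index names here are hypothetical):

     CREATE UNIQUE INDEX employee_ssn_idx ON employee ( ssn )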

Different database vendors have different means of generating values that we can use for surrogate keys.  Oracle has objects called sequences that are separate from tables and generate sequential values.  What SAP Sybase Adaptive Server Enterprise (ASE), SAP Sybase SQL Anywhere (SA) and Microsoft SQL Server (MSS) have are autoincrementing columns.  (Microsoft SQL Server also provides functions to generate GUIDs as unique identifiers, but that’s beyond the scope of this article).

Creating them in the database

Creating an autoincrementing column in the database is fairly straightforward.  For ASE and SA, you simply define the column as a numeric data type (e.g., integer) and then for the default value specify ‘autoincrement’ (see Figure 3).

image005.png

For MSS, you handle it all through the data type of the column.  Each of the numeric data types has a variant with an 'identity' suffix, so you could define a column as 'numeric(5) identity' (see Figure 4).

image007.png

At this point, the database will start assigning incrementing numeric values to those columns each time a row is inserted into the table.  You don't assign a value when you do the insert; you determine what value was assigned afterward and update your client with that.
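In DDL terms, creating such a table might look something like the following sketches (the employee table is hypothetical, and you should confirm the exact syntax for your database version):

     -- SQL Anywhere
     CREATE TABLE employee (
          emp_id    INTEGER DEFAULT AUTOINCREMENT PRIMARY KEY,
          emp_name  VARCHAR(60)
     )

     -- Microsoft SQL Server
     CREATE TABLE employee (
          emp_id    NUMERIC(5,0) IDENTITY PRIMARY KEY,
          emp_name  VARCHAR(60)
     )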

Referencing them from the DataWindow


Once we have the column in the table, it's fairly easy to reference it from the DataWindow as well.  The main thing we need to do is look at the Update Specifications for the DataWindow and ensure that the autoincrementing key is indicated as the "Identity Column" attribute in the update specifications (see Figure 5).  That tells PowerBuilder that it needs to get the value of that column for inserted rows and update the DataWindow buffer with the values.

image009.png
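If you export the DataWindow source afterward, the setting shows up in the column definition.  It should look something like this abbreviated (and hypothetical) example, the key part being identity=yes:

     column=(type=long updatewhereclause=yes key=yes identity=yes name=emp_id dbname="employee.emp_id" )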

How the DataWindow gets the value

OK, this is where things get a bit trickier.  So far it looks like magic.  You create a column in the database that autoincrements, you tell the DataWindow that such a column exists, and you’re done, right?
Not quite.  It's important to know how PowerBuilder got that value, because it impacts the way you configure and deploy your application.  The secret behind this is a file in the PowerBuilder/Shared directory called PBODB[XXX].INI, where [XXX] is the major version number of PowerBuilder you are working with.  For example, if you are working with PowerBuilder 12.5, the file is called PBODB125.INI.  Note that this file contains the configuration information used to determine the value that was assigned to the new row even if you are not using the ODBC driver (e.g., you are using the OLE DB or SNC driver to talk to MSS).  That brings us to the first important note:

If you are referencing autoincrement columns from DataWindows, you must deploy the PBODB[XXX].INI file with your application regardless of which driver you are using to talk to the database.

There is a section at the end of the file starting with the title "DBMS Driver / DBMS Settings" that explains what all the various entries in the file are and how they are used.  We'll walk through how it works for autoincrementing columns in particular.

The first thing PowerBuilder does when it sees that you've referenced an autoincrementing column it needs to get the value for is look to see which database you're working with; it then looks up the initial section of the file that references that database.  It also looks for the PBSyntax entry within that section.  For example, for SA, the entries look like this:

[Sybase SQL Anywhere]
PBSyntax='WATCOM50_SYNTAX'

For MSS:

[Microsoft SQL Server]
PBSyntax='MS_SQLSERVER_SYNTAX'

And for ASE I believe it uses this one (quite some time ago ASE was called SQL Server, but was renamed to distinguish it from its Microsoft cousin):

[Sybase SQL Server]
PBSyntax='SYBASE_SYNTAX'

What it then does is look in the corresponding syntax section for an entry called GetIdentity.  Once again, for SA you'll find the following (there are a lot of other entries there; I've left them out for clarity):

[WATCOM50_SYNTAX]
GetIdentity='Select @@identity'

For MSS it has this. Note the alternate syntax and the comment explaining it.  We’ll get into that in a moment:

[MS_SQLSERVER_SYNTAX]
GetIdentity='Select @@identity'
; GetIdentity='Select max(IDENTCOL) from &TableName'
; Alternative technique if table has insert trigger associated with it.

ASE has much the same:

[SYBASE_SYNTAX]
GetIdentity='Select @@identity'
; GetIdentity='Select max(IDENTCOL) from &TableName'
; Alternative technique if table has insert trigger associated with it.

At this point we know the actual SQL statement that PowerBuilder is going to use to get the value of the identity column.  It queries for the value of a database session global variable called @@identity that holds the last value of an identity column assigned to any table for the current session.

And that's where the rub comes in.  The main issue (as the comment indicates) is that if you have a trigger on the table, and the trigger inserts into another table that also has an autoincrementing column in it (perhaps a table that audits changes to the main table), the value PowerBuilder gets back when it queries for @@identity may not be the value you want.
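To make that concrete, here's a sketch of the sequence (the employee and employee_audit tables are hypothetical):

     INSERT INTO employee ( emp_name ) VALUES ( 'Smith' )
     -- an insert trigger on employee fires and inserts a row into
     -- employee_audit, which also has an autoincrementing column
     SELECT @@identity
     -- on ASE and MSS this now returns the value generated for
     -- employee_audit, not the one generated for employee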

For SA this actually isn't an issue.  SA treats @@identity as a stack.  If a trigger inserts into a table with an autoincrementing column, SA will push the original value of @@identity down and put the value resulting from the trigger in the variable.  It keeps that value throughout the life of the trigger.  When the trigger ends, SA pops the new value off and restores the original value to the variable, the one that your insert actually created.

For MSS and ASE, it’s not quite such a happy situation.  Those databases will overwrite the value of @@identity that you generated with the value generated by the trigger.  That’s why there is the alternate method listed in the file.

GetIdentity='Select max(IDENTCOL) from &TableName'

It isn’t an ideal solution though.  Depending on how you have the isolation level set for your connection, doing a select for the max value of a column on a table might result in your blocking other sessions from updating the table or conversely may allow other sessions to update it resulting in your session still getting an incorrect value in response.

MSS has some other options that you might consider as well.  For example, this syntax should return you the last inserted value of the autoincrementing column without necessarily blocking access to the table.

GetIdentity='Select IDENT_CURRENT ( &TableName )'

You only really need to worry about this if you have a table you're inserting records into with an autoincrementing column and that table has a trigger that inserts into another table with an autoincrementing column in it.  Once you have even one table set up that way, though, you'll need to change the PBODB[XXX].INI file to change the method used to retrieve the value for all tables with autoincrementing columns.

Otherwise, just deploy the PBODB[XXX].INI file unaltered with your application and you'll be fine.
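If you do need to make the change, switching MSS over to the IDENT_CURRENT approach, for example, would mean editing the [MS_SQLSERVER_SYNTAX] section of the file along these lines (commenting out the default entry and substituting the alternative):

     [MS_SQLSERVER_SYNTAX]
     ; GetIdentity='Select @@identity'
     GetIdentity='Select IDENT_CURRENT ( &TableName )'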

Other things to watch out for


There are a couple of other things worth mentioning that you need to be aware of when using autoincrementing columns.  Both have to do with the way that the values are generated by the database.  When any session makes an initial insert into a table with an autoincrementing column in it, the database will not only generate the first value, but also a cache of a number of other values to be used for subsequent inserts.  The primary reason is to ensure that the generation of autoincrementing columns doesn't become a bottleneck when batch inserts are being done.  The number of values that get cached is something that can be configured for the database.

What happens, though, when the database is shut down, either intentionally (for example, to do full backups of the database files) or unintentionally (a crash of the database for some reason)?  When that happens all those cached values are lost, and the next insert into that table will generate the next value after the last cached value, not the last used value, as well as a new set of cached values.  This is referred to as the 'burn factor' for the autoincrementing column, because of the values that are being burned (i.e., not used) for the column.  What this means is that, depending on how often the system is restarted, you might find huge gaps in the values being used.

This really shouldn't be an issue, because you shouldn't be using an autoincrementing column to generate something intelligent like a value that you need to be sequential without gaps (e.g., a PO number).  You should be aware of this, though, and not surprised when you see it happen.

It also means that when you initially size the column that will be storing the values, you will need to account for the burn factor.  Let's say you estimate that your employee table will never, ever have to contain more than 9,999 entries, because you're a company with roughly 5,000 employees and you have low turnover.  If you size the autoincrementing column as a number(4), you may find out a few years down the line that you've run out of values that will fit in the column.  If you have a very high burn factor (because the database is caching a large number of values and you're restarting the database on a rather frequent basis), you could end up burning through more than 10,000 values to generate surrogate keys for far fewer than 5,000 records.

Just make sure that you allow a bit of extra room in the data type for the autoincrementing column.

Summary


What you should realize now is how valuable autoincrementing columns are and how easy it is to configure the PowerBuilder DataWindow to take advantage of them.  There are a few things to watch out for as I’ve pointed out, but otherwise it’s a great feature that you should consider using.

Don't miss the SAP Database & Technology Academy

It's a series of short (e.g., < 15 minute) videos explaining how to use various SAP products.  Here's the first set for PowerBuilder, with more coming soon.

Friday, October 19, 2012

PowerBuilder Developer's Conference - Day 5 - TeamSybase breakfast

Normally TeamSybase gets together a couple of times during a TechWave, perhaps a day before it starts for a team meeting and then at least once more during the event for a lunch or dinner.  This year we were only able to get together on the last day for breakfast, and even then some had to leave early (or not even make it) because of early flights.

Nonetheless, we were able to accomplish the primary thing we wanted to during the gathering.  We presented Sue Dunnell with an award recognizing her efforts in product management for the PowerBuilder product.


Thursday, October 18, 2012

PowerBuilder Developer's Conference - Day 4 - Lunch and Clubhouse

After my second session I headed over to the exhibit hall to get some lunch.  On the way, I heard someone shout out "free beer" from the Clubhouse area.  I don't drink, but it still got my attention, so I wandered over, just in time to find Dave Fish and John Strano offering a demonstration of PowerBuilder 15 for all of the SAP TechEd attendees.


Wednesday, October 17, 2012

PowerBuilder Developer Roadmap

Sue Dunnell presented the PowerBuilder Developer Roadmap at the PowerBuilder Developer's Conference at SAP TechEd earlier this week.  It was similar to the roadmap that was presented earlier this month in the webcast.  As in the webcast, the planned enhancements were broken up into two categories: "Near Term" and "Future Innovation".  It appears those are standard categories for SAP product roadmaps; SAP doesn't give planned release dates or the versions in which enhancements will be included in their products.  I saw similar categories (and the same lack of specifics on release dates or versions) at other SAP product roadmap sessions at TechEd.

P1020559 (Custom).JPG
In the case of PowerBuilder, the enhancements were also color-coded to distinguish those that were "Original Plan of Record" from those that were "Additions to the Plan" as a result of SAP's growing involvement in the product.  The one major difference here from the webcast earlier this month was the addition of "PowerBuilder Classic enhancements" as "Mid-Term" and "Original Plan of Record" enhancements.  I believe that indicates that their not being included in the webcast was simply an oversight.  In particular, John Strano later demoed some of the PowerBuilder Classic enhancements that have been shown before in road shows, particularly docking windows.