Wednesday, June 15, 2016

My Daily WTF?! moment: Negative reservation

Still trying to get my head around this one. Seriously, MS? A negative reservation? Too many downtown Seattle microbrews during lunch hour?

image

In the inventory transactions I find the line that causes this. It is a physical reservation for qty -1, and it is linked to a sales order.

The sales order has only one line for the item. It has an output order on it, with a reservation of +1 on a different location than above.

With reference to my article about the inventTrans and the rule of never deleting lines from there: that’s exactly what I’m going to do. But I mean… WTF?

The theory of negative reservations

Now gather ‘round, boys and girls, and imagine what the world would look like if negative reservations existed. What would they be like?

I imagine it would be like a placeholder on a location. Don’t put anything on this shelf, because I am expecting a returned item from a customer to be placed in this position soon. That’s what it would say whenever someone tries to issue a put-away to that location.

It would be like the numbered markers they give you at McDonald’s. You place one on your table instead of the Big Mac that you ordered, and by the time your fries are cold, a waitress will come and replace the marker with a freshly assembled burger.

A quantum-physical approach to negative inventory

Or maybe it is even more conceptual. String theory enthusiasts, pay attention. This might bring you one step closer to the Theory of Everything.

If a positive reservation is a marker on an item in inventory, then a negative reservation is a marker on an item that is not in inventory. Basically you are saying that you will fulfill your requirement (e.g. a sales order line) using an item that is not in your inventory.

Imagine the collection of all items. This collection has two subsets by definition: [items that are in your inventory] and [items that are not in your inventory]. Each of these subsets has at least one further subset: [items that at some time will be in your inventory] and [items that at some time will not be in your inventory] (if you run your business right, then the majority of your inventory will be part of the latter). But consider the set [items that are not in your inventory [items that at some time will be in your inventory]]. We already know that these items will be in inventory at some time in the future. If we consider time as just another inventory dimension (and in a quantum-physical world there is no reason that we shouldn’t), then we must be able to make reservations on a location that has inventory in dimension time x (where x is a time somewhere in the future).

Because we are putting a marker on an item that is not in our inventory, we are creating a negative reservation.
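For the set-theory fans, the argument above can be jotted down like this (tongue firmly in cheek, with time promoted to an inventory dimension as hypothesized, and all symbols being my own invention):

```latex
% U:   the collection of all items
% S:   items that are in your inventory now, S \subseteq U
% F_x: items that will be in your inventory at future time x, F_x \subseteq U

% A positive reservation marks an item that is in inventory:
i \in S

% A negative reservation marks an item that is not in inventory now,
% but does have inventory in dimension time = x:
i \in (U \setminus S) \cap F_x
```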

When you think about it, it makes perfect sense.

Friday, June 10, 2016

Security (part 1)

…and then you find yourself responsible for setting up a security and authorization policy in AX, or worse…you inherit somebody else’s twisted logic.

Up to version 6, Dynamics AX had always lacked a solid authorization framework. In 2012 Redmond decided to make up for this, and boy did they make up. They didn’t just come up with a security system that actually works; they made it so that even the brightest minds will have a hard time coming up with a working solution that satisfies all parties involved.

Badges? Badges? We don’t need no stinking badges!

So who needs authorization?

Users

First and foremost, a good authorization system will benefit your users. AX is big, bad, and complicated. By shielding your users from stuff they don’t need, it becomes many times more user-friendly. Your users may even become friendlier towards you.

Don’t look at it as restricting users. Although there are certain no-go areas in the system, I do believe that a good hiring policy and proper training are much more effective than a restrictive ERP system. However, limiting a user’s privileges to the areas that are actually used in his role helps to lay out the boundaries of his functional process.

Auditors

Auditors don’t just want authorization (for users); they demand it. Segregation of duties, eyes only, SarbOx, you name it. Users should have appropriate clearance for any area of your system, and if not, they have no business there. Furthermore, it is important to have some kind of traceability. Who did what, when, and why? And that includes you and your nerd friends on the fourth floor, Brother Technicolor.

The Law

In some cases, and in countries whose legislation has evolved past that of Belarus, you are actually required by law to restrict access to certain system areas. Think privacy-sensitive data in HR or financial data in GL. Take a little time to investigate the requirements in your neck of the woods, or in the woods that you control. Better yet, convince your CEO to have the company’s legal counsel spend some quality time on the subject.

Your boss

So bosses and/or customers come up with some crazy shit, and it’s not always easy to disagree.

Example from the past: “Sales people are only allowed to see customers from their own district. We don’t want them running off to the competition with our entire customer database!”

Sure, but… but…

So I built it.

You

No, you don’t. Life is easier without having to worry about who is allowed to do what, or about people nagging about missing privileges. If you can get away with setting up a system without security, I say go for it. But if you can’t (or won’t), do it right off the bat. The longer you postpone it, the harder it will be.

You again

As long as you’re at it, ask yourself if you really need that administrator privilege for everything you do all the time.

Back in the days when ERP systems ran on UNIX and were administered from a console (the way the Lord intended it), I routinely logged myself out with the kill -a command (which kills any and all processes running under your account). Unfortunately I also got into the habit of routinely logging in as root. Inevitably came the day when I killed all processes as root, and I slowly watched the system die while waiting for the phones to ring.

In other words, damage control. Limit yourself to what you really need on a daily basis. It only takes a minute to grant yourself sysadmin rights when needed.

Three general rules

1. Keep it in proportion.

Unless you are guarding the Coca-Cola formula, the gold reserve of Pharrell Williams, or the memoirs of Dick Cheney, your security should be to scale.

2. Keep it simple.

Or at least as simple as possible. Remember, you are the one who has to maintain this crap, and you are not going to want to do that forever, so the next guy should be able to understand it as well.

Keep it as simple as possible (but not simpler than that).

3. Don’t be a villain.

(gotta keep chillin’). The sense of absolute power can easily go to your head. Users will have to come beg and crawl for you to grant them access to sessions that they require to perform their daily tasks. Bribes have never smelt so sweet (I will do almost anything for home-made cookies) and who knows how far the new girl in Accounts Payable will go to see all vendor invoices?

But perhaps it is better for your karma to not be too restrictive.

To be continued…

Thursday, June 9, 2016

My Daily WTF?!

Haven’t had a WTF?! moment in quite some time now, but the planets have aligned, the moon is in the second house, and the spirits have woken.

Check this out. I still haven’t solved the mystery. According to the debugger HCMWorker.personnelId contains a totally different value from reality. Where does it get its information from? Some secret data cache? Another dimension? Just some lucky guess?


image

Monday, March 7, 2016

My daily WTF? moment: Batch tracking

 

Today left me in a permanent state of WTF? after having to deal with this little gem of a problem.

The issue started simply enough: someone was unable to pick an order while there was clearly sufficient inventory of a certain item.

image

It was relatively easy to find out that the problem was initially caused by a batch dimension that was active for the current inventory.

image

The pick line had no batch dimension on it. That made perfect sense, because the item tracking dimension group has no active batch tracking.

image

What did not make sense was the revelation that the item was counted into inventory with batch and all.

image

Now, as far as I know, it is not possible to even fudge a counting journal with a batch number and post it when batch control is turned off in the item tracking dimension group. The same is true for all other inventory movements. AX will scrutinize any posting attempt and “just say no”.

It is also not possible to change the item tracking dimension group on an item with transactional history.

WTF?

In the end, I created a correction job that changed the inventDim on the inventTrans and the inventJournalTrans, and then recalculated the inventSum. This freed the inventory from its hostage situation, but I do hope our accountants don’t read my blog.

(Did I type that out loud?)
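For the morbidly curious, the correction job looked roughly like this. A minimal sketch only: the item id is hypothetical, the inventJournalTrans loop and all the safeguards of the real thing are left out, and the final recalculation is reduced to a comment.

```
static void FixPhantomBatch(Args _args)
{
    InventTrans inventTrans;
    InventDim   inventDim;
    InventDim   inventDimNoBatch;

    ttsBegin;

    // Point every transaction of the offending item at an inventDim without a batch
    while select forUpdate inventTrans
        where inventTrans.ItemId == 'PHANTOM-ITEM'   // hypothetical item id
    {
        inventDim = InventDim::find(inventTrans.InventDimId);

        inventDimNoBatch.data(inventDim);        // copy the dimension values...
        inventDimNoBatch.InventBatchId = '';     // ...minus the illegal batch number
        inventDimNoBatch = InventDim::findOrCreate(inventDimNoBatch);

        inventTrans.InventDimId = inventDimNoBatch.InventDimId;
        inventTrans.update();
    }

    // (same treatment for inventJournalTrans omitted here)

    ttsCommit;

    // ...and finally recalculate the inventSum for the item with the standard
    // on-hand recalculation, so the totals match the corrected transactions.
}
```

Needless to say: don’t try this on a production system without a backup and an accountant-proof story.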

Wednesday, January 27, 2016

Pub/Sub options in AX

(And now for something completely different)

Someone recently pointed out to me that AX (as of version 6) has the possibility of using publishers.

For those of you who are not familiar with the concept, let me offer a brief, fully biased, explanation.

Pub/sub is a concept where a class A reports (publishes) an event rather than acting on it. Its subscribers, any number of classes, receive the event and perform whatever action they think they should.

A publisher is therefore essentially an event splitter (careful if you quote me like that; some people might just bite your head off).

For old-school developers (yes, I started out as a COBOL clown back in the 80s) this is totally dreadful. Structured programming goes completely out the window, as execution becomes somewhat unpredictable.

Let me show an example:

The publisher class:

class Elsevier
{
    // The delegate is the event broker: subscribers attach to (and detach from) it at runtime
    delegate void delegate1()
    {
    }

    // Anyone calling this method triggers a publication
    public void callme()
    {
        this.delegate1();
    }
}

And three subscribers:

class ReadElsevier1
{
    static void method1()
    {
        info("Oh when the Saints");
    }
}

class ReadElsevier2
{
    static void method1()
    {
        info("Go marching in");
    }
}

class ReadElsevier3
{
    static void method1()
    {
        info("That's when I like to be in that number.");
    }
}

And then stitch it all together like this:

static void Job10(Args _args)
{
    Elsevier     elsevier = new Elsevier();

    elsevier.delegate1 += eventhandler(ReadElsevier1::method1);
    elsevier.delegate1 += eventhandler(ReadElsevier2::method1);
    elsevier.callme();
    elsevier.callme();
    elsevier.delegate1 -= eventhandler(ReadElsevier1::method1);
    elsevier.delegate1 -= eventhandler(ReadElsevier2::method1);
    elsevier.delegate1 += eventhandler(ReadElsevier3::method1);
    elsevier.callme();
    elsevier.delegate1 += eventhandler(ReadElsevier1::method1);
    elsevier.delegate1 += eventhandler(ReadElsevier2::method1);
    elsevier.delegate1 -= eventhandler(ReadElsevier3::method1);
    elsevier.callme();
}

Nutshell: publisher Elsevier has a delegate method (the event broker) that publishes every time the method callme is called. With the eventhandler keyword we subscribe (and unsubscribe) the ReadElsevierX classes to the publications.

When I execute the job, I get:

image

Not quite what I was aiming for, and when I execute it again:

image

Oh the horror! Chaos has descended upon us. The order of events is determined by heartbeats and clock ticks, not through structure and programming lore.

What I like to think is that the execution of events is determined by processor availability rather than by processing wait time. From that perspective, this approach is the best thing since sliced bread in an era where single-core systems are as extinct as COBOL programmers.

Of course you need some serious processes to reap the benefits of this multi-threading technique, but it’s good to know that it’s available if you should ever need it. And in case you always wondered what these oddball delegate methods are doing, you now have the answer: pub/sub has come to your neighborhood.

There is much more to say about the subject. PubSub is intended for asynchronous messaging between applications and will without doubt be available as such in the future. Imagine that an AX class can publish an event to any of your other software. The possibilities are unlimited.

image

For now AX won’t even allow publishers and subscribers to be on different tiers. If a publication is done server side, then the subscription class must also run on the server. So don’t run off with this concept quite yet.

More reading:

Wikipedia

How to use X++ Delegates in Dynamics AX 2012

Interesting article about pubsub in C#

Wednesday, January 20, 2016

Obsolete method

I learned something today.

You can render a method obsolete like this:

/// <summary>
/// An obsolete method in a class far, far away.
/// </summary>

[SysObsoleteAttribute('This is the error message you will see when you compile code that calls this method even though it is obsolete.', true)]
public void obsoleteMethod()
{
    throw error(Error::wrongUseOfFunction(funcName()));
}

Now when you try to compile code that calls this method, you will see:

image
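For reference, the calling side might look like this (class and variable names are made up, and if memory serves, passing false as the second attribute parameter demotes the error to a compiler warning):

```
static void Job11(Args _args)
{
    ClassFarFarAway farAway = new ClassFarFarAway();

    // With the attribute's second parameter set to true, this line
    // refuses to compile and shows the message from the attribute
    farAway.obsoleteMethod();
}
```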

I don’t care what you say. I think it’s pretty cool.

Wednesday, January 13, 2016

Performance: Importing text or csv files in AX.

Importing text files. Is there anything left to say about that? Well, obviously there is since I am about to do it.

I was asked to look at an import procedure that was once created by a long-gone developer who (surprise, surprise) had not left any documentation. The AX code was confined to a single class that runs nightly as a batch job and took about 4 hours, if (and only if) it ran to conclusion without errors.

My mission: Make it better than it was: Better, stronger, faster!

(and if you recall that line, you too were born in a time when dinosaurs roamed the great plains)

No youngsters, this has unfortunately nothing to do with my salary. Anyway, back to the nuts and bolts and time for some reverse engineering.

The original code was a series of import methods like this:

void readxwnaaa()
{
    AsciiIo         io;
    container       rowValues;
    Filename        filename = @'\\someserver\ccu\XWNAAA.txt';
    XWNAAA          xwnaaa;

    VehicleType     vehicleType;
    RecordIdNumber  recordIdNumber;
    SeqNumber       seqNumber;
    Kritnr          kritnr;
    KritWert        kritWert;
    boolean         isNew = false;

    io = new AsciiIo(filename, 'R');
    io.inFieldDelimiter(';');

    while (io.status() == IO_Status::Ok)
    {
        rowValues = io.read();

        if (rowValues)
        {
            recordIdNumber  = conpeek(rowValues, 1);
            vehicleType     = strRTrim(strLTrim(conpeek(rowValues, 2)));
            seqNumber       = conpeek(rowValues, 3);
            kritnr          = conpeek(rowValues, 4);
            kritWert        = strRTrim(strLTrim(conpeek(rowValues, 5)));

            ttsBegin;
            xwnaaa = XWNAAA::find(vehicleType, recordIdNumber, seqNumber, kritnr, kritWert, true);

            isNew = !xwnaaa.RecId;
            if (isNew)
                xwnaaa.clear();

            if (conLen(rowValues) >= 1)
                xwnaaa.RecordIdNumber     = conpeek(rowValues, 1);
            if (conLen(rowValues) >= 2)
                xwnaaa.VehicleType        = strRTrim(strLTrim(conpeek(rowValues, 2)));
            if (conLen(rowValues) >= 3)
                xwnaaa.SequenceNumber     = conpeek(rowValues, 3);
            if (conLen(rowValues) >= 4)
                xwnaaa.Kritnr             = conpeek(rowValues, 4);
            if (conLen(rowValues) >= 5)
                xwnaaa.KritWert           = strRTrim(strLTrim(conpeek(rowValues, 5)));

            if (!isNew)
            {
                if (xwnaaa.Kritnr   != xwnaaa.orig().Kritnr ||
                    xwnaaa.KritWert != xwnaaa.orig().KritWert)
                {
                    xwnaaa.update();
                }
            }
            else
            {
                xwnaaa.insert();
            }
            ttsCommit;
        }
    }
}

There are a few things questionable about this code, but from a performance perspective the problem is with database round trips.

What happens is the following sequence:

Read record from file > process record > insert/update record in database. [repeat]

That’s all fine and dandy for a manageable collection of records, but in this particular case we’re processing close to one million records, which takes close to four hours on the batch server. Four hours in which lots of things can go wrong.

Faster is not just better because it takes less time. Faster also means that there are fewer interactions with other processes fighting over limited resources and therefore faster means more reliable.

To speed things up a notch, I want to do as much processing in memory as possible. Wouldn’t it be good if we could build a table in memory, do what we have to do, and then move the whole thing to our database in one mighty blow?

Bet you know where this is going. In comes the Record Sorted List and it does just that.

The RecordSortedList class inserts multiple records in a single database trip.

Now if only there was a way to delete all records that are currently in the table without having to go through one million delete roundtrips…

Tell me, DAX Whisperer, does such a thing exist?

“You can delete multiple records from a database table by using a delete_from statement. This can be more efficient and faster than deleting one record at a time by using the xRecord.delete method in a loop.”

What’s more, you can use delete_from to delete ALL the records in a table, simply by omitting the where clause. I have heard unconfirmed rumors that the downside of this command is that it doesn’t report back when it is finished, which can potentially lead to the false assumption that a table is empty when in fact it is not (yet). Something to bear in mind, but I have not witnessed any unexpected results.

So armed with my new class and a full clip of DeleteFroms, I reconstructed the code to:

void readxwnaaa()
{
    AsciiIo             io;
    container           rowValues;
    Filename            filename = @'\\someserver\ccu\XWNAAA.txt';
    XWNAAA              xwnaaa;
    RecordSortedList    rsl;

    io = new AsciiIo(filename, 'R');
    io.inFieldDelimiter(';');

    // The sort order matches the clustered index of XWNAAA
    rsl = new RecordSortedList(tableNum(XWNAAA));
    rsl.sortOrder(fieldNum(XWNAAA, VehicleType)
                , fieldNum(XWNAAA, RecordIdNumber)
                , fieldNum(XWNAAA, SequenceNumber)
                , fieldNum(XWNAAA, Kritnr)
                , fieldNum(XWNAAA, KritWert));
    ttsBegin;

    while (io.status() == IO_Status::Ok)
    {
        rowValues = io.read();

        if (rowValues)
        {
            xwnaaa.clear();
            if (conLen(rowValues) >= 1)
                xwnaaa.RecordIdNumber     = conpeek(rowValues, 1);
            if (conLen(rowValues) >= 2)
                xwnaaa.VehicleType        = strRTrim(strLTrim(conpeek(rowValues, 2)));
            if (conLen(rowValues) >= 3)
                xwnaaa.SequenceNumber     = conpeek(rowValues, 3);
            if (conLen(rowValues) >= 4)
                xwnaaa.Kritnr             = conpeek(rowValues, 4);
            if (conLen(rowValues) >= 5)
                xwnaaa.KritWert           = strRTrim(strLTrim(conpeek(rowValues, 5)));
            rsl.ins(xwnaaa);
        }
    }
    try
    {
        // Empty the table in a single statement, then move the sorted list
        // to the database in a single bulk insert
        delete_from xwnaaa;
        rsl.insertDatabase();
        ttsCommit;
    }
    catch
    {
        exceptionTextFallThrough();
    }
}

Now doesn’t that look a lot better?

What happens now is that the contents of our .csv file are read into a record sorted list, which quite coincidentally is sorted exactly like the clustered index of the XWNAAA table. When, and only when, we are done with this, the record sorted list is moved over to the database with one mighty insert, but not before we have wiped the contents of the table completely.

Okay, I changed a few more things, but let’s stay focused on the subject. For our daily load of 900,000 records the old version took just under 4 hours to process.

The new version does the job in… (drum roll)… just over 4 minutes.

I say that’s an AXcellent result. Thank you, record sorted list.