Old 06-17-2013, 07:04 PM   #1
mdooligan
Member
 
Registered: Feb 2009
Location: Vancouver BC
Distribution: Mandrake10/ArchHackery/Gentoo
Posts: 179

Rep: Reputation: 22
Dijkstra and a priority queue/heap


I don't get to code much these days. Maybe a couple hours in the evening, and occasionally a decent run in the middle of the night if I don't have to work the next day.

That said, I have this sample code implementing Dijkstra's magic with a heap-based priority queue. It's small, so I'll just post the whole bit for context:
Code:
/*--------------------------------------------------------------------------*/
/* Example #2: http://rosettacode.org/wiki/Dijkstra's_algorithm */
/*--------------------------------------------------------------------------*/
#include <stdio.h>
#include <stdlib.h>
typedef struct node_t node_t, *heap_t;
typedef struct edge_t edge_t;
struct edge_t {
    node_t *nd;                 /* target of this edge */
    edge_t *sibling;            /* for singly linked list */
    int len;                    /* edge cost */
};
struct node_t {
    edge_t *edge;               /* singly linked list of edges */
    node_t *via;                /* previous node on the shortest path */
    double dist;                /* distance from the originating node */
    char name[8];               /* the, er, name */
    int heap_idx;               /* link to heap position for updating distance */
};


/* --- edge management --- */
#define BLOCK_SIZE (1024 * 32 - 1)
edge_t *edge_root = 0, *e_next = 0;
/* Don't mind the memory management stuff; it's beside the point.
 * Pretend e_next = malloc(sizeof(edge_t))
 */
void add_edge(node_t * a, node_t * b, double d) {
    if (e_next == edge_root) {
        edge_root = malloc(sizeof(edge_t) * (BLOCK_SIZE + 1));
        edge_root[BLOCK_SIZE].sibling = e_next;
        e_next = edge_root + BLOCK_SIZE;
    }
    --e_next;

    e_next->nd = b;
    e_next->len = d;
    e_next->sibling = a->edge;
    a->edge = e_next;
}
void free_edges() {
    for (; edge_root; edge_root = e_next) {
        e_next = edge_root[BLOCK_SIZE].sibling;
        free(edge_root);
    }
}
/* --- priority queue stuff --- */
heap_t *heap;
int heap_len;
void set_dist(node_t * nd, node_t * via, double d) {
    int i, j;

    /* already knew better path */
    if (nd->via && d >= nd->dist)
        return;

    /* find existing heap entry, or create a new one */
    nd->dist = d;
    nd->via = via;

    i = nd->heap_idx;
    if (!i)
        i = ++heap_len;

    /* upheap */
    for (; i > 1 && nd->dist < heap[j = i / 2]->dist; i = j)
        (heap[i] = heap[j])->heap_idx = i;

    heap[i] = nd;
    nd->heap_idx = i;
}

node_t *pop_queue()
{
    node_t *nd, *tmp;
    int i, j;

    if (!heap_len)
        return 0;

    /* remove leading element, pull tail element there and downheap */
    nd = heap[1];
    tmp = heap[heap_len--];

    for (i = 1; i < heap_len && (j = i * 2) <= heap_len; i = j) {
        if (j < heap_len && heap[j]->dist > heap[j + 1]->dist)
            j++;

        if (heap[j]->dist >= tmp->dist)
            break;
        (heap[i] = heap[j])->heap_idx = i;
    }

    heap[i] = tmp;
    tmp->heap_idx = i;

    return nd;
}

/*
 * Dijkstra stuff;
 * unreachable nodes will never make it into the queue
 */
void calc_all(node_t * start)
{
    node_t *lead;
    edge_t *e;

    set_dist(start, start, 0);
    while ((lead = pop_queue()))
        for (e = lead->edge; e; e = e->sibling)
            set_dist(e->nd, lead, lead->dist + e->len);
}

void show_path(node_t * nd)
{
    if (nd->via == nd)
        printf("%s", nd->name);
    else if (!nd->via)
        printf("%s(unreached)", nd->name);
    else {
        show_path(nd->via);
        printf("-> %s(%g)", nd->name, nd->dist);
    }
}

int main(void)
{
    int i, j, c;
#define N_NODES 4000 /* whatever number you like */
    node_t *nodes = calloc(N_NODES, sizeof(node_t));

    for (i = 0; i < N_NODES; i++)
        sprintf(nodes[i].name, "%d", i + 1);
    /* given any pair of nodes, there's about 50% chance they are not connected;
     * if connected, the cost is randomly chosen between 
     * 0 and 49 (inclusive! see output for consequences)
     */
    for (i = 0; i < N_NODES; i++) {
        for (j = 0; j < N_NODES; j++) {
            /* majority of runtime is actually spent here */
            if (i == j)
                continue;
            c = rand() % 100;
            if (c < 50)
                continue;
            add_edge(nodes + i, nodes + j, c - 50);
        }
    }

    heap = calloc(N_NODES + 1, sizeof(heap_t));
    heap_len = 0;

    calc_all(nodes);
    for (i = 0; i < N_NODES; i++) {
        show_path(nodes + i);
        putchar('\n');
    }
    free_edges();
    free(heap);
    free(nodes);
    return 0;
}
It's all pretty straightforward, and I kinda like the way the coder did that add_edge() malloc() bit so it fills each block from the end. That way you can't assume the edge_t's are all sequential; in fact they're allocated in reverse.

I'm having a little trouble understanding the priority queue. In set_dist() we have this:
Code:
    i = nd->heap_idx;
    if (!i)
        i = ++heap_len;
    /* upheap */
    for (; i > 1 && nd->dist < heap[j = i / 2]->dist; i = j)
        (heap[i] = heap[j])->heap_idx = i;
    heap[i] = nd;
    nd->heap_idx = i;
I thought this was a bit obfuscated, so, for educational purposes, I re-wrote it for clarity:
Code:
    i = nd->heap_idx;
    if (!i)
        i = ++heap_len;
    /* upheap rewrite for educational purposes */
    while ( i > 1 ) {
        j = i / 2; /* why i/2? */
        if ( nd->dist < heap[j]->dist ) {
            heap[i] = heap[j];
            heap[i]->heap_idx = i;
        } else
            break;
        i = j;
    }
    heap[i] = nd;
    nd->heap_idx = i;
First of all, does this re-write appear correct? It seems to work on my end, and I find I can read it better.

My real question is: what is it doing? To me it appears to be a rough-sort algorithm meant more for speed than accuracy. As long as nd->dist is less than heap[j]->dist, it shoots toward the start of the heap and stuffs nd in wherever it stops. And why j = i/2 and not j = i-1? That's what makes me think it's optimized for speed rather than accuracy, because pop_queue() does a little sorting of its own.

Peace and Cheer.
 
Old 06-17-2013, 07:42 PM   #2
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by mdooligan View Post
First of all, does this re-write appear correct?
Looks correct to me (and I have coded that kind of heap myself many times). But I didn't test your code, so I can't be certain I didn't miss a detail.

Quote:
My real question is: what is it doing? To me it appears to be a rough-sort algorithm that is meant more for speed than accuracy. As long as the nd->dist is less than heap[j]->dist, it shoots toward the start of the heap and stuffs nd in wherever it stops. And why j=i/2 and not j=i-1?
You need to understand the rules of a heap. Usually it is documented with one-based indexing and the rule is:
heap[n]->key >= heap[n/2]->key for all n > 1.

C uses zero-based indexing. Most of my own heap code has used zero-based. Then the rule must be:
heap[n]->key >= heap[(n-1)/2]->key for all n > 0.

It isn't immediately obvious to me how your program switches to one based indexing. Maybe it just wastes the first position, or maybe I missed a detail. But the basic idea of i/2 (rather than i-1) is pretty much the point of a heap.
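To make the index arithmetic concrete, here is a small illustration (not part of the thread's code) of the parent/child relations in both conventions:
Code:
/* Illustration: array-as-tree index arithmetic for a binary heap.
 * One-based (as in the posted code, which wastes slot 0):
 *   parent(i) = i/2,      children(i) = 2*i and 2*i + 1
 * Zero-based (the more common C convention):
 *   parent(i) = (i-1)/2,  children(i) = 2*i + 1 and 2*i + 2
 */
#define PARENT_1(i)   ((i) / 2)
#define LCHILD_1(i)   ((i) * 2)
#define PARENT_0(i)   (((i) - 1) / 2)
#define LCHILD_0(i)   ((i) * 2 + 1)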

A heap is a "rough sort" for speed, but not at the expense of accuracy. If you think about the rules of the heap applying at every position, you should be able to see the first element must be first. Then the second might be second or third. The possible positions for each element become rapidly less certain for elements further from first. But the structure forces the first element to be first (which is what is important for a priority queue) and makes it efficient to "repair" the queue after taking away the first element.
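To put that another way, the only ordering guarantee is the parent/child relation itself. Here is a minimal sketch (against the one-based layout of the posted code, not part of it) of the property that must hold at every position:
Code:
/* Sketch: check the heap property of the posted one-based heap.
 * Every element must be >= its parent; nothing more is guaranteed,
 * which is exactly why the head is always the minimum.
 */
int heap_is_valid(heap_t *h, int len)
{
    int i;
    for (i = 2; i <= len; i++)
        if (h[i]->dist < h[i / 2]->dist)
            return 0;   /* child smaller than parent: property violated */
    return 1;
}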

I wonder about this bit of your code:
Code:
  for (i = 1; i < heap_len && (j = i * 2) <= heap_len; i = j) {
I would normally not include the "i < heap_len &&" part. Maybe it was supposed to cover heap_len so large that i*2 could overflow. But that doesn't work right. Otherwise, it is a wasted conditional branch.

On a related subject, why are i and j int rather than unsigned? I don't see any possibility of negative indexes (even as limits of downward loops) in that code.

Unsigned makes the intent clearer.
Unsigned has one more bit of range (which might matter when using 32-bit indexes in a 64-bit program).
Unsigned would make pop_queue() faster if this is compiled for 64-bit, even if the sizes are small enough that everything fits under 2GB.

Quote:
That's what makes me think it's optimized for speed rather than accuracy, because pop_queue() does a little sorting of its own.
That is often called "repairing" the heap. When you remove an element other than the tail, it must be replaced by something. So to remove the head, this code replaces it with the tail. But when you change or replace an element, its relationship to the ones around it may be wrong and it usually needs to move.

The most important operation on a heap is moving a changed value down the heap. Your heap code also includes the reverse operation, moving a new element up the heap, which is the one you asked about in more detail.
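In the same spirit as the upheap rewrite in post #1, the downheap loop in pop_queue() can be unrolled for readability. This is just a sketch that should behave the same as the posted loop, minus the extra "i < heap_len &&" test questioned above:
Code:
    /* downheap rewrite for educational purposes: sink tmp from the root */
    i = 1;
    while ((j = i * 2) <= heap_len) {
        /* pick the smaller of the two children */
        if (j < heap_len && heap[j + 1]->dist < heap[j]->dist)
            j++;
        if (heap[j]->dist >= tmp->dist)
            break;                          /* tmp is already in place */
        (heap[i] = heap[j])->heap_idx = i;  /* pull the child up a level */
        i = j;
    }
    heap[i] = tmp;
    tmp->heap_idx = i;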

Old 06-17-2013, 07:45 PM   #3
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Rep: Reputation: Disabled
I believe j is the parent of i in a binary tree embedded in a fixed-size array. That way you only need log n comparisons rather than n comparisons to move something up through the queue. It's been a while, so I might be misremembering.

Kevin Barry
 
Old 06-17-2013, 08:27 PM   #4
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by ta0kira View Post
I believe j is the parent of i in a binary tree embedded in a fixed-size array.
I have seen many descriptions making the distinction that in a "binary tree":
leftChild <= parent
parent <= rightChild
While in a heap:
parent <= leftChild
parent <= rightChild

If you take a broader meaning of the term "binary tree" then you are correct. A heap is a structure based on up to two "children" per node and a comparison relation between parent and child.

A heap also is defined and typically used "embedded in a fixed-size array" using array positions rather than pointers to define the parent/child relationships.

But the parent/child relationship of a heap is often used in a linked structure instead and the basic concepts and performance of the heap still apply.
 
Old 06-17-2013, 08:47 PM   #5
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Rep: Reputation: Disabled
I meant "binary tree" in the graphical sense, but that definitely clears it up.

Kevin Barry
 
Old 06-18-2013, 07:01 PM   #6
mdooligan
Member
 
Registered: Feb 2009
Location: Vancouver BC
Distribution: Mandrake10/ArchHackery/Gentoo
Posts: 179

Original Poster
Rep: Reputation: 22
Awesome. And thank you for the very informative posts.
Quote:
It isn't immediately obvious to me how your program switches to one based indexing. Maybe it just wastes the first position, or maybe I missed a detail.
Yes, it wastes index 0, but it does make the math a bit prettier.
Quote:
But the basic idea of i/2 (rather than i-1) is pretty much the point of a heap.
Ah yes. And that certainly would speed things up.

BTW it's borrowed code that I'm using as learning exercise, and working to adapt it to my needs. I have no idea who actually wrote it.
Quote:
A heap is a "rough sort" for speed, but not at the expense of accuracy. If you think about the rules of the heap applying at every position, you should be able to see the first element must be first. Then the second might be second or third. The possible positions for each element become rapidly less certain for elements further from first. But the structure forces the first element to be first (which is what is important for a priority queue) and makes it efficient to "repair" the queue after taking away the first element.
Perfect. That makes complete sense. That is what I was seeing in the routines: the rapidly diminishing accuracy going away from the first element. Left to my own devices I probably would sort the whole thing perfectly every time. But there's no need to, because the routines get pounded so fast you can do a little sorting each time.

Yes. That "i < heap_len &&" bit does seem redundant, especially because we check j for the same condition. But I also wonder whether, if the heap got absurdly huge, it might become an issue and we'd want to bail before j goes completely out of bounds. Anyway, I'm working with a 128x128 grid, and the edges are very controlled. None of this randomly connecting node 17 to every other node by accident, so I'm sure I'm fine without it.
Quote:
On a related subject, why are i and j int rather than unsigned ? I don't see any possibility of negative indexes (even as limits of downward loops) in that code.

Unsigned makes the intent clearer.
Unsigned has one more bit of range (which might matter when using 32-bit indexes in a 64-bit program).
Unsigned would make pop_queue() faster if this is compiled for 64-bit, even if the sizes are small enough that everything fits under 2GB.
Indeed. There is no way this needs negative numbers. Excellent observation. I think it's a sloppy/lazy oversight. I've done the very same thing with char instead of unsigned char, and been mystified by strange behavior until I found my sloppy typo.
Quote:
That is often called "repairing" the heap. When you remove an element other than the tail, it must be replaced by something. So to remove the head, this code replaces it with the tail. But when you change or replace an element, its relationship to the ones around it may be wrong and it usually needs to move.

The most important operation on a heap is moving a changed value down the heap.
Perfect. Because that was going to be my next question.

Thank you again. The fog is clearing. I can see that with these push/pop routines, the desired value (min/max/whatever) percolates to the top. Very clever. I don't like to implement stuff that I don't understand, but I feel now I can tweak this code to my needs. I have another project or 2 that might be able to use a heap type of thing also, whenever I might get around to it.

Peace and Cheer.
 
Old 06-21-2013, 07:23 PM   #7
mdooligan
Member
 
Registered: Feb 2009
Location: Vancouver BC
Distribution: Mandrake10/ArchHackery/Gentoo
Posts: 179

Original Poster
Rep: Reputation: 22
Dijkstra's heap works.

I thought this was a forum-worthy update:

I finally got the time to implement the new routines. Some commentary:

1. On an open grid, Dijkstra doesn't do a single straight line. It does the 'hockey stick': straight as far as it can go, then diagonal to the target. Kinda ugly. But, still, that's what my old routine does without reduction. And that goes back to my original proposition #1: "Does a straight line work? If so, use it."

2. With a short path with only 1 route, old routine and new routine are comparable.

3. With long twisty path with many alternate routes, the difference is mind boggling:

Old routine with full reduction:
moves:177 pcost:2100 590093usec(3333usec/move)
New routine:
moves:165 pcost:1940 5696usec(34usec/move) (!!!)

Dijkstra is 100 times faster! And the more complicated the path, the bigger the relative speedup. I mean, it barely has to think about the problem. It's done. Bang. This is where Dijkstra's heap truly has it going on.

***

As an addendum, my original "drunken Irishman" routine (which is a target-oriented bucket-fill with basic reduction) does this on the same map:
moves:206 pcost:2450 1467usec(7usec/move)

It doesn't think at all. So it easily beats Dijkstra on speed, but on a complicated route it misses the 'best path' by a mile. It would be great for a simple opponent that's maybe not too smart. But when the route is simple or a straight line, it works better/faster than Dijkstra's.

As a matter of fact, it appears that Dijkstra actually takes the same amount of time regardless of the length and complexity of the path. So the question is: what is a quick and dirty way to guess when to apply Dijkstra's heap?

The fun thing about my basic old routine, perhaps I should call it the "Leprechaun" instead of the "drunken Irishman", is that I can make it left-handed or right-handed, and it changes quite a bit depending on the map complexity. This makes it somewhat unpredictable. Dijkstra is very predictable on a given map. Excellent for some things, for sure.

After I clean up the code, I'll likely have a go at A*. And thanks again. That heap stuff is truly useful, once I got a clue.

Peace and Cheer.
 
Old 06-22-2013, 10:25 AM   #8
ntubski
Senior Member
 
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,786

Rep: Reputation: 2083
Quote:
Originally Posted by mdooligan View Post
Old routine with full reduction:
moves:177 pcost:2100 590093usec(3333usec/move)
New routine:
moves:165 pcost:1940 5696usec(34usec/move) (!!!)
Moves means the number of steps in a path? I notice that cost/step is not constant, but you have nice round numbers for the total cost. This implies you have a non-Euclidean cost structure (cost of a diagonal is √2 times the cost of moving straight in a Euclidean system). Just something to think about if you are finding paths that don't "look right".


Quote:
So the question is: What is a quick and dirty way to guess when to apply Dijkstra's heap?
I guess you could try the straight line until you hit an obstacle, whereupon you would fall back to Dijkstra. A* might be better in all cases, though.
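A rough sketch of that idea in C (grid_is_clear(), walk_straight_line() and dijkstra_path() are hypothetical helpers, not anything from the thread's code):
Code:
/* Sketch of "straight line first, Dijkstra as fallback". */
int find_path(node_t *from, node_t *to)
{
    if (grid_is_clear(from, to))        /* no obstacle on the direct line */
        return walk_straight_line(from, to);
    return dijkstra_path(from, to);     /* fall back to the full search */
}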
 
Old 06-22-2013, 12:05 PM   #9
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
What problem are you trying to solve?

The program in post #1 computes the shortest paths from start to every other node. But the description in post #7 seems like it is about computing a path from start to just one other node. Which do you actually want?

Quote:
Originally Posted by mdooligan View Post
What is a quick and dirty way to guess when to apply Dijkstra's heap?
With what objective? My gut feel is that whatever your objective, you could simply improve that algorithm for that objective, rather than decide when to use it.

If you want the path to only one target: for more quickly finding a decent path (within cost 49 of optimal) you could simply make the search stop as soon as it finds any path to the objective. If you want the optimal path, you can still save a lot by stopping as soon as you prove the optimal path, rather than going on to find optimal paths to everywhere harder to reach.
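In terms of the code from post #1, stopping at a single target is a small change to calc_all(). A sketch, assuming the same set_dist()/pop_queue() as posted:
Code:
/* Sketch: single-target variant of calc_all() from post #1.
 * When the goal itself is popped from the queue, its distance is final
 * (Dijkstra's invariant), so the search can stop right there.
 */
void calc_to(node_t *start, node_t *goal)
{
    node_t *lead;
    edge_t *e;

    set_dist(start, start, 0);
    while ((lead = pop_queue())) {
        if (lead == goal)
            break;                      /* optimal path to goal is proven */
        for (e = lead->edge; e; e = e->sibling)
            set_dist(e->nd, lead, lead->dist + e->len);
    }
}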

With moderate added complexity (and again only for single destination) you could do the search from both ends to meet in the middle.

If you assume a moderate level of physical sanity in the map, then you could bias the search with Cartesian distance. That is much more general and powerful than kludges like trying the straight line first. If the straight line works, the biased search will quickly find it. If it almost works, the biased search will quickly find the necessary detours. Note that the double-ended search tends not to help if you are doing the biased search. The double-ended search gets much of the same benefit as the biased search (though not as much), but it still works well for insane maps (such as the random one in the initially posted code) where the biased search has no benefit.

If the compute cost for Cartesian distance is too high, a crude approximation may be nearly as effective.
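One way to express that bias in terms of the post #1 structures (a sketch; the x/y fields and the goal parameter are assumptions, since the posted node_t carries no coordinates) is to order the priority queue by the known distance plus an estimate of the remaining distance, which is essentially the A* f-value:
Code:
#include <math.h>

/* Sketch of the "biased search" key. The heap would compare this value
 * instead of nd->dist. Assumes node_t has been given x/y coordinates,
 * which the posted struct does not have.
 */
double biased_key(node_t *nd, node_t *goal)
{
    double dx = nd->x - goal->x;
    double dy = nd->y - goal->y;
    return nd->dist + sqrt(dx * dx + dy * dy);  /* Cartesian bias */
}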

 
Old 06-23-2013, 07:02 AM   #10
mdooligan
Member
 
Registered: Feb 2009
Location: Vancouver BC
Distribution: Mandrake10/ArchHackery/Gentoo
Posts: 179

Original Poster
Rep: Reputation: 22
Quote:
Originally Posted by ntubski View Post
Moves means the number of steps in a path? I notice that cost/step is not constant, but you have nice round numbers for the total cost. This implies you have a non-Euclidean cost structure (cost of a diagonal is √2 times the cost of moving straight in a Euclidean system).
Yes. At this time the grid is 40x55 (2200 points), spotted with random obstacles (usually about 900). Horizontal/vertical moves cost 10, diagonal moves cost 14. Close enough to √2 for a map of this size.
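For reference, that cost model as a small function (a sketch in the style of post #1; the dx/dy arguments are grid offsets, which the posted code does not track):
Code:
/* Sketch: octile step cost with the 10/14 weights described above.
 * Straight moves cost 10 per cell, diagonals 14 per cell,
 * approximating 10 * sqrt(2) = 14.14.
 */
int octile_cost(int dx, int dy)
{
    if (dx < 0) dx = -dx;
    if (dy < 0) dy = -dy;
    return (dx < dy) ? 14 * dx + 10 * (dy - dx)
                     : 14 * dy + 10 * (dx - dy);
}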
Quote:
I guess you could try the straight line until you hit an obstacle, whereupon you would fall back to Dijkstra. A* might be better in all cases, though.
First step is always straight line. The program doesn't even try anything else unless straight line is blocked.
 
Old 06-23-2013, 08:32 AM   #11
mdooligan
Member
 
Registered: Feb 2009
Location: Vancouver BC
Distribution: Mandrake10/ArchHackery/Gentoo
Posts: 179

Original Poster
Rep: Reputation: 22
Quote:
Originally Posted by johnsfine View Post
What problem are you trying to solve?

The program in post #1 computes the shortest paths from start to every other node. But the description in post #7 seems like it is about computing a path from start to just one other node. Which do you actually want?
Basically chase and evade through a very complicated map, so it's computing path from 1 node to 1 other node. Because it's game oriented, I'm actually looking for multiple ways to calculate paths. Some paths need to be optimal (Dijkstra) and others not so optimal to have an element of randomness/quirkiness to them.

What I'm finding is that with some techniques, the time required to calculate the path is linear in the number of steps. The path is not so good, but it's very fast. With other techniques, the time is roughly the square of the number of steps. The path is much better, but long twisty routes can be very slow to calculate. And with Dijkstra, it appears to take the same amount of time (~5500usec with this map) regardless of the number of steps to the target. At this point I'm working to improve that speed by tailoring the algorithm to what I need, but with short paths there are better techniques, like you say.

Dijkstra really shines when calculating long arduous routes.

I was just happy that I got it working satisfactorily, and sharing a few observations about the results.

Peace and Cheer.
 
Old 06-23-2013, 09:31 AM   #12
ntubski
Senior Member
 
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,786

Rep: Reputation: 2083
Quote:
Originally Posted by mdooligan View Post
And with Dijkstra, it appears to take the same amount of time (~5500usec with this map) regardless of the number of steps to the target.
I think you missed the point of johnsfine's post. It's independent of target distance because you are using a version that doesn't stop when it finds the target, but keeps going and calculates shortest paths to all targets.
 
Old 06-23-2013, 07:15 PM   #13
mdooligan
Member
 
Registered: Feb 2009
Location: Vancouver BC
Distribution: Mandrake10/ArchHackery/Gentoo
Posts: 179

Original Poster
Rep: Reputation: 22
Quote:
Originally Posted by ntubski View Post
I think you missed the point of johnsfine's post. It's independent of target distance because you are using a version that doesn't stop when it finds the target, but keeps going and calculates shortest paths to all targets.
You're right, and I realized that after my last post. But at this point I'm enjoying it because it calculates the entire map in ~5500usec. Once I've called it with a source point, it's such a simple thing to extract the ideal paths to/from anything on the map to/from that point. It's brilliant.
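For instance, pulling one such path out of the solved map is just a walk back along the via pointers, the same chain show_path() prints. A sketch built on the post #1 structs (the out/max_len parameters are illustrative):
Code:
/* Sketch: extract the path to nd after calc_all() has run, by walking
 * the via chain back to the start and reversing it into out[].
 * Returns the number of nodes written, or 0 if nd was unreached
 * or the buffer is too small.
 */
int extract_path(node_t *nd, node_t **out, int max_len)
{
    int n = 0, i;
    node_t *p;

    for (p = nd; p; p = (p->via == p) ? NULL : p->via) {
        if (!p->via || n >= max_len)
            return 0;
        out[n++] = p;
    }
    for (i = 0; i < n / 2; i++) {       /* reverse: start -> nd order */
        node_t *t = out[i];
        out[i] = out[n - 1 - i];
        out[n - 1 - i] = t;
    }
    return n;
}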

Frankly, I need to rework my code to handle many paths, many sources, and many targets. Right now I can only do 1 source and multiple targets, but the structs can only hold 1 path at a time. I can toss them into an array of GLists, but there are some other things that I overlooked when I started writing it, and it's getting way too fugly. My code has issues and needs a complete re-write at this point. Everything is clumped into 2 .c files, and it's out of hand. Originally it was just a pathfinding experiment; now it's turning into a game. I have things running around my map now...ii ...ii ...ii

Anyway, Dijkstra's heap works, and that makes me happy.

Peace and Cheer.
 
  

