So my friends suggested AWS S3 as a good way to host a static blog site, and I had been meaning to do this transition for a few years, but just never put it high enough on the to-do list.
I then hunted through backups looking for all the files, as I was super lazy and only had sub-folders of the site saved on my current PC, and then uploaded it all to S3. I turned on CloudFront via the great temp domain https://dyk3v11u6y5ii.cloudfront.net/nikon-patch/nikon-patch.html which still works!
Once the domain transferred, there were more days of waiting while things propagated before Route 53 would let me add a cert, and after adding that it took ages (~an hour) to propagate before the auto-fill in CloudFront would let me select the alias domain. All fun stuff, trying to work out why it was not working.
Then I set up a Lambda@Edge script to auto-convert /blog/postname/ requests into /blog/postname/index.html, which is how the files are actually laid out in S3. Not said so far, but AWS has really good documentation; the major flaw is that you tend to find the wrong docs when googling, but once you are looking at the current/correct Lambda@Edge docs, the UI/code makes sense. The docs I followed, and the script I more or less used.
I installed a 403 error handler that loads my 404-page.html page, which fires a Google Analytics event so I can see which pages are getting 404s (with this setup CloudFront sees a 403 rather than a 404 from S3 for missing objects, hence handling 403).
I used to have a rather large collection of URL rewrites on the Windows server, so once this was done my base page hits were more aligned with how they used to be:
Once I had the 404 page, I found my own blog posts were the major source of bad links; the joys of a Windows web host, where all files are equal.
I made a number of alterations to Hexo modules so the blog behaves as I want, and did a large amount of mass editing of old posts, but those can have their own posts in the future.
But so far the monthly bills, ignoring the domain transfer, have all been under $1 NZD/month (and $0.50 of that is the fixed Route 53 charge), so it's a rather pleasing saving.
So on the 30th of May 2021 I woke to some helpful emails, and an issue posted on GitHub, saying "your domain has expired". Which was super strange, given I had a couple of years of domain registration to go.
If you went to any page on simeonpilgrim.com, sure enough it appeared as if the domain had expired.
I logged into the hosting company's (kiwigeeks.net) customer service area, and I could see I had a couple of years of domain registration left, but when I tried to log into the HELM interface on the servers I could not log in. Back to the customer service area, where I "reset" my HELM passwords, but those didn't work either.
It was starting to look like someone had hijacked the Windows Server 2008 boxes (strange, that).
The DNS records in the customer service area still showed the correct values, but Google's Dig tool showed the name servers were some other boxes, not the correct hosts ns1.techservers.net & ns2.techservers.net. If you are a previous customer of KiwiHosting using their $10 NZD/month Windows hosting, then your stuff might also be dead.
Anyway, I posted a request for help:
But after a couple of hours of not being able to regain control of the host, I decided it was time to leave KiwiGeeks. I should have left KiwiGeeks years ago, like when they stuffed up my domain renewal.
So I pinged my friends on Discord, and the suggested solution was AWS S3 hosting (which worked well, as I had already moved to static content via Hexo).
I kicked off a domain transfer from KiwiGeeks to AWS Route 53 (it was a toss-up between AWS DNS services and Google; the Discord team were split on this, but in the end the costs are about the same, and having it all in one shop felt like it might work out better for me). It turned out "lucky" that the domain transfer "code" functionality still worked, and then I got to wait. I was "super" excited to receive the automated email from KiwiGeeks saying they would sit on the transfer for 5 days, for safety reasons. I mean, that is sensible, except this was an exception.. or maybe not; I'm not sure I would have wanted the hijackers to be able to take my account "super fast". Anyway, at the time it was super slow, and now it's just done.
Anyway, KiwiGeeks.net support never got back to me; I updated my request to "close my account", and they still ignored me. So a large part of this post is to demonstrate that I tried with them, and that I will not "pay" any outstanding debt if they are stupid enough to try to action it.
Well, first post this is not, but it sure is the first Hexo post. My old WordPress blog was getting rather old, and I made a couple of mistakes in updating the server, which meant I've not been able to post for over a year.
Not that I had posted anything in 2016, but things have moved on in many dimensions, and at times I'd like to be able to document stuff (for myself) again.
Given my host cannot "upgrade PHP on the server" (a WTF in and of itself) to a version that supports current WordPress, I need to get off it. So I almost randomly chose Hexo, and have been altering the defaults somewhat, imported the old blog (and comments), and done a lot of editing of the formatting (with heaps more to go).
So now this needs some testing to see how it all goes live in production..
Anyway, there are no comments; email me at simeon.pilgrim@gmail.com and I'll mainly add your comment if it makes sense.
I have been using http://www.snowflake.net for new data processing at work for a few months, and it's just amazing to be able to run large queries over large data-sets. The ability to increase the cluster size when doing development work, to get faster turnarounds without impacting the production cluster, is brilliant.
One of the things I have noticed is slower than I would like is joins based on tableA.time being inside a time range of tableB.start and tableB.end, when the time period covered is months rather than days.
So the pattern, mapping a value from TABLE_I onto all rows in the time span (not including the end), looks like this:
CREATE OR REPLACE TEMPORARY TABLE WORKING_B AS
SELECT tp.u_id, tp.time, i.value
FROM TABLE_P tp
JOIN TABLE_I i ON tp.u_id = i.u_id
AND tp.time >= i.start_time AND tp.time < i.end_time;
For one set of data spanning 3 months, the above takes 45 minutes on a small cluster, with TABLE_P at 65M rows and TABLE_I at 10M rows. Whereas for a similar set spanning 4 days, and ~45M rows, this takes 30 seconds.
So I added TO_DATE(time) and TO_DATE(start_time) columns to the two tables, then added AND tp.time_day = i.start_time_day to the join, and the first query went to ~60 seconds. But I was missing a few million rows, as my time ranges span multiple days…
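For clarity, here is a minimal sketch of that intermediate attempt, assuming the day columns were added by making copies of the tables (TABLE_I_DAYS is a made-up name for the TABLE_I copy; the TABLE_P copy with time_day is the TABLE_P_B used further down):

CREATE OR REPLACE TEMPORARY TABLE TABLE_P_B AS
SELECT p.*, TO_DATE(p.time) AS time_day
FROM TABLE_P p;

CREATE OR REPLACE TEMPORARY TABLE TABLE_I_DAYS AS
SELECT i.*, TO_DATE(i.start_time) AS start_time_day
FROM TABLE_I i;

-- fast, but only matches rows that fall on the range's first day
SELECT tp.u_id, tp.time, i.value
FROM TABLE_P_B tp
JOIN TABLE_I_DAYS i
ON tp.u_id = i.u_id
AND tp.time_day = i.start_time_day
AND tp.time >= i.start_time AND tp.time < i.end_time;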
So I tried many things that didn't work (like trying to use a GENERATOR with dynamic input) and settled on a simple solution:
CREATE OR REPLACE TABLE TEN_YEARS_OF_DAYS(date) AS
SELECT DATEADD(day, (rn - 1), DATEADD(months, 4, DATEADD(years, -10, CURRENT_DATE))) FROM (
SELECT ROW_NUMBER() OVER (ORDER BY 1) AS rn
FROM TABLE(GENERATOR(rowCount => 365*10)) v);
CREATE OR REPLACE FUNCTION get_dates_for_N_days ( start_date DATE, end_date DATE )
RETURNS TABLE (date DATE)
AS 'SELECT date FROM TEN_YEARS_OF_DAYS WHERE date BETWEEN start_date AND end_date';
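Just to show the function in isolation, it can be called directly like any other table function (the dates here are arbitrary examples):

SELECT *
FROM TABLE(get_dates_for_N_days(TO_DATE('2017-01-01'), TO_DATE('2017-01-05')));
-- returns one row per day of that range that exists in TEN_YEARS_OF_DAYS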
So this creates a table with ten years of days (shifted 4 months into the future) and a table function that selects rows from it, so I can do a LATERAL join on that function:
CREATE OR REPLACE TEMPORARY TABLE TABLE_I_B AS
SELECT e.*, t.date as range_day_part
FROM TABLE_I e, LATERAL get_dates_for_N_days(TO_DATE(e.start_time), TO_DATE(e.end_time)) t;
So the above code creates another temp table, with a row per TABLE_I record for every day in its range, so each day becomes a duplicate row. Now we have more rows in the second table, but we can do a date-based match to speed up the query:
CREATE OR REPLACE TEMPORARY TABLE WORKING_B_B AS
SELECT tp.u_id, tp.time, i.value
FROM TABLE_P_B tp
JOIN TABLE_I_B i
ON tp.u_id = i.u_id AND tp.time_day = i.range_day_part
AND tp.time >= i.start_time AND tp.time < i.end_time;
This code runs in 60 seconds and gives the same results as the 45 minute code.
Things to note: putting LATERAL table joins onto selects with CTEs presently breaks the SQL parser; in fact, even nested selects and LATERAL don't mix, hence the extra tables with _B names etc. Also, CTEs make life so much easier, but as you start joining to them a lot, performance slips. I have found that the point where I do a complex join is a good time to output to a temporary table, and then the performance is again crazy good, as sketched below.
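As a minimal sketch of that last point (COMPLEX_JOINED is a made-up name, used purely for illustration): instead of defining an expensive join as a CTE and joining to it repeatedly, materialise it once as a temporary table and join to that.

-- materialise the expensive join once, instead of repeating it via a CTE
CREATE OR REPLACE TEMPORARY TABLE COMPLEX_JOINED AS
SELECT tp.u_id, tp.time, i.value
FROM TABLE_P_B tp
JOIN TABLE_I_B i
ON tp.u_id = i.u_id AND tp.time_day = i.range_day_part;

-- later queries join to (or aggregate over) the temporary table,
-- rather than re-evaluating the same CTE each time
SELECT c.u_id, COUNT(*) AS rows_per_user
FROM COMPLEX_JOINED c
GROUP BY c.u_id;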
James has a new post at http://prog21.dadgum.com/206.html in which he wonders if lots of the idioms like const, static, or sealed classes are there to allow developers to protect themselves from themselves. I would have commented on his blog, but he doesn't like comments, which is a completely sane stance to have; heck, the only comments I get here are spam.
So I had always thought those idioms were there to allow compiler optimizations, because they carry extra information that was not tracked by older compilers, or to make the code cleaner.
const variables are there to avoid macros ("true evil") and to keep the type information; static keeps the symbol table size down; sealed allows real optimization, as those functions will never be replaced by an override.
And these were needed so the new language could perform "better" at some key benchmark, the kind used to once again prove that some poor "real world" problem is faster in assembly or C than in new language X, and thus that all problems should be solved in C/asm.
The other day on Google+ I read a post from Ana Andres talking about coding Tiny Planets, and how it was running really slowly in MATLAB, somewhere in the pixel drawing code.
That got me thinking about how I would code it in C#. Ana had a good blog post showing one way of thinking about how the transformation happens; one of her goals was to make an animated GIF so people could see the wrap/bend happening.
Now, this was plenty fast enough; I was getting 2-3 fps on my laptop using only one core, so from a speed perspective I moved on.
My first plan was to interpolate between the points (although that would be really slow, and look bad at the outside), but when I showed my co-worker my method, he said I was being silly and should just traverse the output space and find the input pixel that maps to each output pixel, so I did that and got:
I then had to put the bend code back in, as that was a special aspect needed for the animation, and got:
I then started playing with sub-sampling to make the output less jaggy (F), then decided to blend in YUV (or YCbCr) colour space, and finally settled on a five-point YUV average, sampling the centre of the pixel plus four points 1/4 of the way towards the pixel's diagonal corners. Giving:
For now the GIFs are converted to videos, but if they are too slow I will have to go back to plain links.
An improvement on the process would be to allow scaling and translation of the origin of the polar focus; this might be useful in an interactive UI to allow exploring the space.
Here's my source code for Step H; it was fun learning to make GIFs in C#.
We are porting our MSVC/Win32 applications to Clang/GCC/Linux and have just spent the morning tracing why our unit tests fail.
#include <math.h>   // for NAN and isnan

void BadExampleCode()
{
    double a = NAN;
    ASSERT(isnan(a));   // ASSERT is our unit-test macro; this fires under GCC with -ffast-math
}
Under MSVC and Clang all is good; under GCC the assert fires. We added printfs and looked at the assembly, and the isnan call was hard-coded to 0.
Some googling found posts from 2006 stating that GCC's -ffast-math did odd things with isnan, and it's still a currently reported issue.
This came about because we are using Premake and had the FloatFast flag set, because that's how our MSVC projects were set up, and we don't want to change those builds. So for now we have tweaked the Premake code for this flag under GCC, as it doesn't make sense that you would ever want isnan to be hard-coded to zero; that's really not fast maths at all.
We also put in an #ifdef check for -ffast-math so the build will break if this flag is turned on again in the future.
And by "we" I mean Dave; I just sat and talked it through with him.
After a couple of weeks of hard code tracing, Coderat nailed where and how the bit rates for Nikon DSLR movie recording are set. The sad thing is the source of the bit rates is just a simple function that had been documented for a couple of years; the completeness of its effect was just not understood. Sigh.
But now we have patched in higher bit rates. To that end, 34 Mbps and 54 Mbps have been tested on a D5100, and I've tested 34 Mbps, 49 Mbps and 64 Mbps on my D7000.
The online patch tool now lets you install these rates on the D5100 and D7000, only at 1080p for now, but I'm sure more models/modes will follow as requested.
I had no recording problems with 64 Mbps (really 60 Mbps, as I had no sound being recorded), using either my slow Transcend Class 10 card or my fast SanDisk Extreme Pro, which I purchased for these very tests back in 2012.
So if you want to test or talk about this, come on over to the Nikon Hacker forums.