Monday, December 02, 2013

This blog has moved


This blog is now located at http://funofmathblog.blogspot.com/.

For feed subscribers, please update your feed subscriptions to
http://funofmathblog.blogspot.com/feeds/posts/default.

Saturday, January 31, 2009

DD-WRT for Airlink101 AR430W

The DD-WRT open source WiFi firmware works great on the inexpensive Airlink101 AR430W Wireless G router (I have picked up a few of these routers for $15 on sale at Fry's), but the standard instructions for flashing the DD-WRT firmware are slightly incorrect. Here is a corrected version.


Thursday, January 01, 2009

Simple Example of the Lenz Effect

Someone posted a YouTube video to Reddit Science of the Lenz effect (or Lenz's law) on a block of aluminum in an MRI machine that was intriguing. Since I do not have an MRI machine or any other superconducting magnets, I followed the advice of another poster that the effect is observable with hard drive magnets and sheet aluminum. It worked. The magnets are from a dead DeskStar hard drive that failed with the "click of death", and the aluminum strips are from a soda can. Contrast the way a piece of cardboard falls between the magnets with the way the aluminum strip falls through.



Be sure to click on the video and then click the "watch in high quality" link below the video. It's much clearer.


Thursday, September 11, 2008

The (non)Partisan Truth

Factcheck.org has fairly analyzed all of the popular claims by both major parties in this election and recent elections. Is Obama really against nuclear energy as McCain boldly proclaimed? Find out. Is the picture of Sarah Palin in a bikini holding a rifle real? No.

Some partisan fun: vet McCain and vet Palin. I suspect Factcheck.org does a better job of getting the full story.


Faith in the Election

While there is something to admire about each of the candidates in the Presidential race, and I am sure all four of them believe they are doing the right things for the right reasons (only comic book villains commit evil for its own sake[1]), I cannot find myself supporting a position based primarily on fear of our enemies and how we can overpower them. All of the candidates should subscribe to the notion that we should not repay evil with evil but overcome evil with good. I assume this because they all profess to be Christians, and Romans 12:21 is not controversial to my knowledge. It is not in the Gospels, but it does corroborate the admonitions against the use of power. I am pretty certain that "good" does not mean "good bombs", and if we want to be a shining beacon of goodness and power in the world, we have to give up our tendency to abuse that power. Our purpose is not the redemption of just ourselves but of those with whom we live, and our nation's purpose is not just the redemption of those fortunate enough to be protected as citizens but of those who despise us as well. This is the hallmark of non-violent opposition. Evil must be overcome with good, and if the means are not good, the evil we fight will not be overcome but simply displaced. The result is either a never-ending fight against ever more enemies or the conversion of ourselves into our own enemies.

How can we overcome evil with good? First we have to know that good does not come from fear. Concern is not fear. Preparedness is not fear. Fear is the animal response that tells us we must hold onto those things that are precious to us. As Frank Herbert writes, "Fear is the mind-killer." What I call faith is the alternative to fear. The only certitude needed for this faith is this: Be not afraid. The good comes from faith, hope, and charity. At first this looks like the fear-versus-love continuum ridiculed in "Donnie Darko", but I am not talking about a solution to bed wetting. I am talking about the basic position from which we approach other people. What about power? If I relate to others by means of my power versus their power, is that not something other than fear or love? Maybe. What is the purpose of relating by my power against their power? Is it the desire to hold onto what is precious to me? Does it let me love that person? Charitable love is giving rather than taking, but only to the extent that it is not done intentionally to build up my own power in some version of "enlightened self-interest." If faith, hope, and charity are to govern our relationships, then the use of power, which takes from others, is contrary to our ability to love: not just those like us, but also those of the wrong religion and those we consider our enemies.

Overcoming evil with good requires us to give to our enemies, but what can we give them? We could give them money, but that is likely only to worsen the situation if we have not gotten past violence. What we must give them first is dignity. As human beings we are obligated to do that much. Calling them evil does as much good as flipping off a thunderstorm, and it does a lot of harm. They call us evil, too, and both of us get an ego boost of righteous indignation, since each of us can point to the horrible things that they have done to us. And they have, or at least somebody has, and we naturally like to reinforce our stereotypes, so we just stick to a simple "they". This indignation is the worst possible thing. I used to live on it, though. I would think, "If only so-and-so would do this-and-that, we would not have this problem." "Thank goodness I am not like that wretch." Indignation leads to righteous anger, which I am told is the worst poison of the soul, and I believe that. Once I succumb to righteous anger, I can scapegoat anyone I want, because everyone has done something wrong. I can cause trouble to anyone to "raise awareness" for my cause while saying "look what those people made me do (to you)". It is a huge self-deception, but we fall for it when we succumb. In order to give our enemies dignity, we have to give up our righteous anger and indignation. We have to act out of love rather than fear. To be the shining example, we act not because we expect them to repay us in kind (for even the tax collectors do that), but because it is the right thing to do.

We have to give them faith, hope, and charity because that is the only way we can redeem them and redeem ourselves.

[1] Richard Mitchell, The Gift of Fire. A wonderful book, available free online, though I much prefer reading it in book form.


Saturday, May 17, 2008

Carolina Anoles




There are plenty of bugs in our yard, providing enough food to support several anoles. My older brother used to have one of these as a pet "chameleon". Here's the best shot I could get holding my Olympus C700 a couple of feet away from this Carolina anole, Anolis carolinensis.


Friday, March 21, 2008

An Implementation of Pessimistic Locking with JPA

I've worked for several years with a Java application that has relied upon pessimistic locking since before I was hired back in 1999. This was before Hibernate and certainly before the newer Java Persistence API (JPA). I have rewritten the data layer for some of our transactional processing a few times since then as the application changed from a dot-com to a B2B dot-com to a startup with a successful business and, now, to a department acquired by a large company. The data layer that existed when I joined was an interesting XML object database stored in SQL Server 7. All who touched it will not forget the powerful lesson we learned, so I will not talk anymore of that here. Actually, I do need to mention that it used pessimistic locking for doing transaction processing on those XML clobs. That is the last mention of it. At some point we got a bit smarter and decided that we ought to be using our relational database to store records structured in tables defined by a schema.

After moving to a more typical object-relational mapping in Java, our system still depended upon pessimistic locking to implement transaction processing. This was accomplished in SQL Server 7, and later SQL Server 2000, using REPEATABLE READ isolation for read-only connections and READ COMMITTED isolation for read-write connections. In a read-only connection the REPEATABLE READ isolation ensured that shared locks in the database were held for the duration of a database transaction. Read-write connections relied on doing an UPDATE on rows before reading them to get an exclusive lock that was held for the duration of the database transaction. This effectively allowed concurrent reads, but did not allow writes to be concurrent with reads or other writes. It provided consistent reading of a header object and its line items by locking the header row with either a shared read lock (REPEATABLE READ and a SELECT) or an exclusive write lock (READ COMMITTED with an UPDATE). It worked, but it meant that locks were held even on data that was only being read.

SQL Server 2005 provides a feature called row-level versioning that is similar to multiversion concurrency control (MVCC) in PostgreSQL and other databases. SNAPSHOT isolation and READ COMMITTED isolation with row-level versioning are features of SQL Server 2005 that allow reading without taking any locks. SNAPSHOT isolation ensures that all of the data read in the transaction is unaffected by updates that occur during the read-only transaction. This means we no longer need to take a read lock in REPEATABLE READ isolation in order to have consistency. When writing, we still just do an UPDATE in order to get an exclusive lock on the database row.

Microsoft has a lot of information on the row versioning concurrency that is new in SQL Server 2005. When reading in SNAPSHOT isolation, the database reads "all data that was committed before the start of each transaction." Updates are a little more complex:
Uses row versions to select rows to update. Tries to acquire an exclusive lock on the actual data row to be modified, and if the data has been modified by another transaction, an update conflict occurs and the snapshot transaction is terminated.
This is just what we want. (In PostgreSQL the SERIALIZABLE isolation is nearly identical to the SNAPSHOT isolation of SQL Server. The READ COMMITTED isolation of PostgreSQL is also nearly identical to that of SQL Server when the READ_COMMITTED_SNAPSHOT setting of SQL Server is turned on.) It may not be obvious, even if you read the Microsoft link, that this SNAPSHOT mode of SQL Server is optimistic locking implemented on the database server. The JPA specification (section 3.4.3) requires these behaviors of optimistic locking:

If transaction T1 calls lock(entity, LockModeType.READ) on a versioned object, the entity
manager must ensure that neither of the following phenomena can occur:

  • P1 (Dirty read): Transaction T1 modifies a row. Another transaction T2 then reads that row and obtains the modified value, before T1 has committed or rolled back. Transaction T2 eventually commits successfully; it does not matter whether T1 commits or rolls back and whether it does so before or after T2 commits.

  • P2 (Non-repeatable read): Transaction T1 reads a row. Another transaction T2 then modifies or deletes that row, before T1 has committed. Both transactions eventually commit successfully.
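The version-based behavior the spec demands amounts to a compare-and-set on a version column. Here is a rough sketch of that idea as a toy class of my own (not JPA API, and the data write is not atomic with the version bump the way a real database would make it):

```java
// Toy illustration of optimistic locking: an update succeeds only when the
// row's version is unchanged since it was read; a stale writer is rejected
// and must retry its transaction.
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticRow {
    private final AtomicLong version = new AtomicLong(0);
    private volatile String data = "";

    // Version observed at read time; a real row would carry a version column.
    public long readVersion() {
        return version.get();
    }

    // Compare-and-set on the version number.
    public boolean tryUpdate(long expectedVersion, String newData) {
        if (version.compareAndSet(expectedVersion, expectedVersion + 1)) {
            data = newData;
            return true;
        }
        return false; // analogous to an OptimisticLockException
    }

    public String getData() {
        return data;
    }

    public static void main(String[] args) {
        OptimisticRow row = new OptimisticRow();
        long v = row.readVersion();
        System.out.println(row.tryUpdate(v, "first"));  // true: version matched
        System.out.println(row.tryUpdate(v, "second")); // false: stale version
    }
}
```

A transaction that read version v can commit its write only if the version is still v; any other writer gets the equivalent of an OptimisticLockException and has to retry.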



Now I shall explain pessimistic locking using these tools and the Java Persistence API (JPA); it is really pretty simple. The JPA does not specify direct support for this. It has separate find() and lock() methods on the EntityManager. One could do this:
public Object findAndLock(EntityManager em, Class c, Object key) {
    Object e = em.find(c, key);
    em.lock(e, LockModeType.WRITE);
    return e;
}

This is pretty good, but it's only an approximation of an atomic find-and-lock method. In a system with high concurrency, we will encounter OptimisticLockExceptions. Let us presume we have an application with a legacy data layer (not JPA) that has this high concurrency, and let us also presume that we cannot remove all of it. (Often I find the serenity prayer useful for constrained design.) The problem above is that the find() loads the row before we lock() it. If another thread also tries to find() and then lock(), only one can succeed; the other will fail, either from the locking rules defined in the JPA specification or from the rules defined for SNAPSHOT isolation. The result of the failure is an exception, and the application code would have to retry the entire transaction, but that is one of the things we must accept we cannot change, at least not immediately.

There is something we can do to avoid this race condition: do the lock() first rather than the find() first. The lock() method takes an entity as a parameter, but we have a way out of the catch-22 because the entity need not be read from the database yet. The EntityManager.getReference() method gives us an object that looks like the entity we would get from find(), but getReference() may return "an instance, whose state may be lazily fetched." So we use this code:
public Object findAndLock(EntityManager em, Class c, Object key) {
    Object e = em.getReference(c, key);
    em.lock(e, LockModeType.WRITE);
    return e;
}

The reference returned is a proxy instance created with java.lang.reflect.Proxy.newProxyInstance(). The proxy instance can return just the primary key that was passed to getReference(). That way the lock() method can get the entity key and lock the row for the entity without first loading it. This removes the race and gives us true pessimistic locking.
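A minimal sketch of such a lazy reference, using only the JDK (the EntityKeyed interface and lazyReference() helper are my own invention for illustration, not part of JPA):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class LazyReferenceDemo {
    // Hypothetical interface: anything that can report its primary key.
    public interface EntityKeyed {
        Object getKey();
    }

    // Creates a proxy that answers getKey() from the key alone; any other
    // method would be the point where a real implementation lazily loads.
    public static EntityKeyed lazyReference(Object key) {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().equals("getKey")) {
                return key; // no database load needed just to lock the row
            }
            throw new UnsupportedOperationException(
                "would trigger a lazy load for " + method.getName());
        };
        return (EntityKeyed) Proxy.newProxyInstance(
            EntityKeyed.class.getClassLoader(),
            new Class<?>[] { EntityKeyed.class }, h);
    }

    public static void main(String[] args) {
        EntityKeyed ref = lazyReference(42);
        System.out.println(ref.getKey()); // prints 42, no "load" performed
    }
}
```

A lock() implementation handed such a proxy can extract the key and issue its locking UPDATE without ever reading the row first.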

I hope that some JPA implementations will offer a solution for pessimistic locking without requiring special API extensions. This solution is one that I've implemented in my own partial JPA implementation, and it works very well. I have not used any other JPA implementations, or even Hibernate, so there may be a better way to do this. I have only seen references to changing the transaction isolation level, and I hope what I have written explains why that is not a solution.


Wednesday, February 13, 2008

Consider the Source

It has been told to me many times: "Don't believe everything you read." Sometimes I still believe things that I simply should not. For instance, I've been trying to figure out why the javax.print.attribute.standard.MediaSize.findMedia() method starts doing strange stuff after my application has been running for a while. According to the documentation for findMedia(), it tries to find the best match among the "standard" media sizes:
The specified dimensions are used to locate a matching MediaSize instance from amongst all the standard MediaSize instances. If there is no exact match, the closest match is used.

Indeed it seems to work when I try it, so I accept what I've read, until someone complains that the application starts failing to find the right media size. It seems that the definition of "standard MediaSize instances" changes after the application has been running for a while. I solve this by changing my code not to rely on MediaSize.findMedia(), and I'm happy that the bug is fixed, but what was really going on to cause the problem?

Fortunately, Sun has always (or at least as far as I can remember) released source code for the standard Java library, and as a Java developer I of course have a copy installed on my system, so I can take a look at what my call to MediaSize.findMedia() is really doing. The findMedia(float x, float y, int units) method simply loops through a list of MediaSize entries, looks for the one closest to the given dimensions, and returns the "name" of the closest fit. Where this contradicts my assumptions is that the "standard MediaSize instances" turn out to be the entire set of MediaSize instances EVER INSTANTIATED! This means that every time I call new MediaSize(x, y, units), I'm adding a new entry to the "standard" set, and my instances do not have names on them, so findMedia(), which returns the name of the closest match, starts returning null, the name of one of my instances.

Okay, now I know why my code was failing, but after looking at the code a bit more, I see a couple of curious things. First I notice that the code was written by former C programmers because of the style. Then I notice that it unnecessarily uses some local variables of type double when float would do. Neither of these things matters, but I notice such things.
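Assuming a standard JDK, this is easy to reproduce with the javax.print classes themselves. The dimensions below are arbitrary odd values I chose so that the only exact match is the unnamed instance we construct:

```java
import javax.print.attribute.Size2DSyntax;
import javax.print.attribute.standard.MediaSize;
import javax.print.attribute.standard.MediaSizeName;

public class FindMediaDemo {
    public static void main(String[] args) {
        // Before: an odd size resolves to the closest named standard size.
        MediaSizeName before = MediaSize.findMedia(4.25f, 7.77f, Size2DSyntax.INCH);
        System.out.println(before); // some non-null standard media name

        // Constructing an unnamed instance silently registers it in the
        // static "standard" list...
        new MediaSize(4.25f, 7.77f, Size2DSyntax.INCH);

        // ...and now the exact (distance zero) match is the unnamed
        // instance, whose media size name is null.
        MediaSizeName after = MediaSize.findMedia(4.25f, 7.77f, Size2DSyntax.INCH);
        System.out.println(after); // null
    }
}
```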

However, when I think about things a little more, I realize that the "standard" list of media sizes is a java.util.Vector that gets an entry for every instance of MediaSize ever made. This list is never, ever cleared, which means it is a memory leak, and one that is triggered every time users of the site do a particular task. It is a good thing that memory is cheap and that I do not create these objects in a loop that would allocate a lot of them that never get garbage collected. It does mean that I need to remove all calls that instantiate MediaSize objects as part of a user-initiated action.
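When the sizes involved are standard ones, a leak-free alternative is to look up the predefined named instances rather than construct new ones, since getMediaSizeForName() returns an already-registered instance:

```java
import javax.print.attribute.Size2DSyntax;
import javax.print.attribute.standard.MediaSize;
import javax.print.attribute.standard.MediaSizeName;

public class MediaLookupDemo {
    public static void main(String[] args) {
        // Returns the predefined instance; nothing new is added to the
        // static registry, so repeated calls cannot leak.
        MediaSize letter = MediaSize.getMediaSizeForName(MediaSizeName.NA_LETTER);
        float[] dims = letter.getSize(Size2DSyntax.INCH);
        System.out.println(dims[0] + " x " + dims[1]); // 8.5 x 11.0
    }
}
```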

At least I feel I can trust the source code.
