Coming up on my defense, so no fun for a little while. Hopefully I’ll be back in May with new, interesting (to me!) tidbits and adventures. Happy trails!
Decided to publish the code from yesterday's example. WordPress only allows certain somewhat odd file types to be shared, so I am just copy/pasting below. There are 5 functions; as usual, you save each to the same folder in MATLAB, open that folder, and then typing

[X,Y,Z] = billiardgame3(25,10,15,20)

will generate a 3-dimensional billiard game with 25 balls of masses between 0 and 10 and speeds between 0 and 15, bouncing for 20 seconds, then play it. It also returns the position vectors [X,Y,Z] associated with the game. I'd be glad to also publish the 2-dimensional version, though it should not be too hard to wade through my code and cut it down to two dimensions.
function [v1n,v2n] = collide3(m1,m2,v1,v2)
%solves conservation of momentum and kinetic energy for two balls of mass m1
%and m2 colliding at velocities v1 and v2, and returns their new
%velocities v1n and v2n
C1 = (m1-m2)/(m1+m2);
C2 = (2*m2)/(m1+m2);
C3 = (2*m1)/(m1+m2);
v1n = C1*v1+C2*v2;
v2n = -C1*v2+C3*v1;
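As a sanity check on these formulas, here is a quick Python translation of collide3 (not part of the MATLAB set, just a check for this post) confirming that momentum and kinetic energy come out the same before and after:

```python
def collide(m1, m2, v1, v2):
    # 1D elastic collision, same coefficients as collide3 above
    c1 = (m1 - m2) / (m1 + m2)
    c2 = 2 * m2 / (m1 + m2)
    c3 = 2 * m1 / (m1 + m2)
    return c1 * v1 + c2 * v2, -c1 * v2 + c3 * v1

m1, m2, v1, v2 = 2.0, 3.0, 5.0, -1.0
w1, w2 = collide(m1, m2, v1, v2)
print(m1 * v1 + m2 * v2, m1 * w1 + m2 * w2)    # momentum before and after
print(m1 * v1**2 / 2 + m2 * v2**2 / 2,
      m1 * w1**2 / 2 + m2 * w2**2 / 2)         # kinetic energy before and after
```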
function [A,W] = collisions3(X,Y,Z,rad,w,l,h)
%takes vectors X, Y, and Z where each entry contains the x-, y-, or z-position of
%a billiard ball and returns an n x 2 matrix A containing the indices of pairs
%of balls that are colliding (assuming balls have radius rad),
%and a matrix W containing the indices of balls that collided with walls
%of a w x l x h box
C = X>=w; %checks collisions with walls, one indicator vector per wall
D = Y>=l;
E = X<=0;
F = Y<=0;
G = Z>=h;
H = Z<=0;
C = C+D+E+F+G+H; %C(j) > 0 exactly when ball j is outside the box
W = [];
for j = 1:length(C)
    if C(j) > 0
        W = [W;j];
    end
end
X = ones(length(X),1)*X; %square matrices whose rows are copies of X, Y, Z
Y = ones(length(Y),1)*Y;
Z = ones(length(Z),1)*Z;
B = sqrt((X-X.').^2+(Y-Y.').^2+(Z-Z.').^2); %a matrix whose (i,j)th entry is the distance between particle i and particle j
B = B<rad; %flag pairs closer than rad
A = [];
for j = 1:size(B,2)
    for k = 1:j-1
        if B(k,j)
            A = [A;k,j];
        end
    end
end
function [X,Y,Z] = billiards3(w,l,h,M,vx,vy,vz,X0,Y0,Z0,T,rad,tstep)
%simulates billiards with elastic collisions in a w x l x h box. M
%should be a vector recording the (positive) masses of the billiard balls
%(the function will create as many balls as the length of M). vx, vy, vz and
%X0, Y0, Z0 are similarly vectors giving the initial velocities and initial
%positions of each billiard ball. The program runs for T seconds with time
%step tstep. A reasonable setup is the one used by billiardgame3 below.
X = zeros(floor(T/tstep),length(M)); %initialize the three position arrays, one column per particle
Y = zeros(floor(T/tstep),length(M));
Z = zeros(floor(T/tstep),length(M));
X(1,:) = X0; %set initial position
Y(1,:) = Y0;
Z(1,:) = Z0;
for k = 2:floor(T/tstep)
    tryxpos = X(k-1,:)+tstep*vx; %here we check if any collisions will happen in the next step
    tryypos = Y(k-1,:)+tstep*vy;
    tryzpos = Z(k-1,:)+tstep*vz;
    [A,W] = collisions3(tryxpos,tryypos,tryzpos,rad,w,l,h);
    for j = 1:size(A,1) %avoiding collisions with particles
        [vx(A(j,1)),vx(A(j,2))] = collide3(M(A(j,1)),M(A(j,2)),vx(A(j,1)),vx(A(j,2)));
        [vy(A(j,1)),vy(A(j,2))] = collide3(M(A(j,1)),M(A(j,2)),vy(A(j,1)),vy(A(j,2)));
        [vz(A(j,1)),vz(A(j,2))] = collide3(M(A(j,1)),M(A(j,2)),vz(A(j,1)),vz(A(j,2)));
    end
    for j = 1:length(W) %avoiding collisions with walls
        if tryxpos(W(j)) >= w || tryxpos(W(j)) <= 0
            vx(W(j)) = -vx(W(j));
        elseif tryypos(W(j)) >= l || tryypos(W(j)) <= 0
            vy(W(j)) = -vy(W(j));
        elseif tryzpos(W(j)) >= h || tryzpos(W(j)) <= 0
            vz(W(j)) = -vz(W(j));
        end
    end
    X(k,:) = X(k-1,:)+tstep*vx; %updating the position with the "fixed" velocity vectors
    Y(k,:) = Y(k-1,:)+tstep*vy;
    Z(k,:) = Z(k-1,:)+tstep*vz;
end
function playpool3(X,Y,Z,w,l,h) %signature is my reconstruction; the original line was lost
%a program for visualizing billiard movement in 3 dimensions.
for j = 2:size(X,1)
    plot3(X(j,1),Y(j,1),Z(j,1),'r.','MarkerSize',20) %color the first particle red
    hold on
    plot3(X(j,2:end),Y(j,2:end),Z(j,2:end),'bo') %keep the rest of the particles blue circles
    plot3(X(2:j,1),Y(2:j,1),Z(2:j,1),'r-') %draw the path of the first particle
    hold off
    axis([0 w 0 l 0 h])
    drawnow
end
function [X,Y,Z] = billiardgame3(N,massmax,vmax,T)
%generates and plays a game of billiards with N randomly placed billiard
%balls in a w x l x h box. The balls have random mass between 0 and massmax,
%velocity components between -vmax and vmax, and the game plays for T seconds,
%with timestep .01.
w = 9;
l = 4.5;
h = 4.5;
M = massmax*rand(1,N);
vx = 2*vmax*rand(1,N)-vmax;
vy = 2*vmax*rand(1,N)-vmax;
vz = 2*vmax*rand(1,N)-vmax;
X0 = w*rand(1,N);
Y0 = l*rand(1,N);
Z0 = h*rand(1,N);
rad = 0.2;
tstep = 0.01;
[X,Y,Z] = billiards3(w,l,h,M,vx,vy,vz,X0,Y0,Z0,T,rad,tstep);
Inspired, as usual, by Leonid's recent post, I decided to first write a script that would mimic his. After that, since I had all the numbers worked out, I wrote two more MATLAB programs: one that mimics elastic collisions in two dimensions, and one that mimics them in three.
In theory, you can specify the number of particles and their radius, as well as the mass, position, and initial velocity for each (I didn’t vectorize radius for some reason, so I cannot model balls of different sizes bouncing around). However, in practice I just generate random vectors for each of these numbers. The final aspect is that the domain I put the balls in was a pool table of 9 x 4.5 units, or 9 x 4.5 x 4.5 for the 3D version. This was just to make calculating the reflecting angle easier when a ball hit the wall.
As with Leonid's code, mine works by checking whether the next step will cause any collisions, then adjusting the velocity vectors so that the collision doesn't happen (using conservation of momentum and kinetic energy). This algorithm is not "smart" in the sense that by avoiding one collision, a ball might get pushed into a second collision which the program does not detect, and if a particle gets going fast enough, it can reflect off a wall from a large distance (my time step is just 0.01). You can spot this in some of the figures below.
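The wall part of one time step can be sketched in a few lines (a 1D Python toy version of the idea, not the MATLAB above): propose a trial position one step ahead, and flip the velocity before committing if the trial lands outside the box.

```python
def step(x, v, tstep, length):
    # propose the next position; if it would leave [0, length], reflect first
    trial = x + tstep * v
    if trial >= length or trial <= 0:
        v = -v
    return x + tstep * v, v

x, v = 0.995, 1.0
x, v = step(x, v, 0.01, 1.0)   # the trial position 1.005 is past the wall, so v flips
print(x, v)
```

The same "fix the velocity, then take the step" shape is what the main loop of billiards3 does in three dimensions, for both wall and ball collisions.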
Anyways, here are some of the outputs. I did not go through the trouble of turning these into .gifs, but they play fairly smoothly. What happens is I simulate N particles of varying masses and velocities bouncing around in a 2- or 3- dimensional box for T seconds, then plot the path of one of the particles. The end position of all the particles, plus this path, is in each picture below (with the “tracked” particle colored in red).
Random thought on the economics of coffee shops:
When I am doing work in a shop, I typically won’t be buying food, but am aware that I am not the most lucrative customer they will get, so I try to tip well. This leads to an odd situation at my most-frequented place: I can get a pot of tea for under $4 with a cash discount, and so will typically get a pot of tea with a $5 bill, and leave the change as tip.
However, sometimes the barista will forget the discount, and here’s the problem: do I point out this mistake? I’ll still be leaving a (noticeably smaller) tip, and so no matter what, I am spending $5. In some sense, I am helping out the barista by pointing out the mistake, but then: is it really worth it to either of us to have this conversation over $0.50? The conversation has been surprisingly confusing in the past, especially when I take the extra money I just made a point of getting back and leave it in the jar anyways.
Also, I would assume that most shops price their coffee based on what (they expect) will maximize profits. I imagine the pricing would be different if the goal were to maximize tips (I'll leave $1.50 on a $2.50 coffee, but $0.60 on a $2.40 coffee). Likely the answer would be different again if the goal were to maximize profit plus tips, but the tea is cool enough to drink now.
I was thinking about the topic that Rice’s VIGRE program has focused on in the past, namely how to visualize large data sets. Now, discrete data is not the greatest thing for an analyst, as opposed to, say, a manifold. Along this line of thinking, one might ask:
If you know that a data set comes from some manifold, is there a way to detect which manifold?
Towards answering this, I wrote a quick program in MATLAB that:
1. Draws N random points from a parametrized surface (so far just a sphere and a torus, but it would be easy to use any parametrized surface),
2. Draws a line connecting each point to its n nearest neighbors.
The program also “ghosts” in the parametrized surface to see how well this method does at illustrating the surface.
At this stage, the program is just something to play around with a bit, but I think the images you get from it are really great! One of the problems it still has is that not every vertex has n edges leaving from it (since the relation “is one of the n nearest vertices” is not symmetric, which is easy to see with a sketch). This also slows down computation since really I am drawing 2*N*n lines, even though only about N*n of them are displayed. Scroll down to see all the images.
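Since I haven't posted the MATLAB for this one, here is a rough Python sketch of the two steps (the function names and the brute-force neighbor search are mine, not the original code): sample N points from a parametrized sphere, then record each point's n nearest neighbors.

```python
import math
import random

def sample_sphere(N):
    # step 1: draw N random points from the standard parametrization of the sphere
    pts = []
    for _ in range(N):
        theta = random.uniform(0, 2 * math.pi)   # longitude
        phi = random.uniform(0, math.pi)         # colatitude
        pts.append((math.cos(theta) * math.sin(phi),
                    math.sin(theta) * math.sin(phi),
                    math.cos(phi)))
    return pts

def nearest_neighbors(pts, n):
    # step 2: for each point, the indices of its n nearest neighbors (brute force)
    nbrs = []
    for i, p in enumerate(pts):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(p, q)), j)
                       for j, q in enumerate(pts) if j != i)
        nbrs.append([j for _, j in dists[:n]])
    return nbrs

pts = sample_sphere(100)
nbrs = nearest_neighbors(pts, 4)
```

Drawing a segment from point i to each point in nbrs[i] gives the picture; note that j can appear in nbrs[i] without i appearing in nbrs[j], which is exactly the asymmetry mentioned above.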
As a further example of yesterday’s post, I was discussing multivariable calculus with a student who had never taken it, and mentioned the gradient. Putting our discussion into the framework of this post, here is what he wanted out of such a high dimensional analogue of the derivative of a function (note to impressionable readers: the function defined below is not quite the gradient):
1. Name the answer: Call the gradient D.
2. Describe the answer: D should be a function from R^2 x R^2 to R^3, which takes a point in the domain and a direction in the domain, and returns a vector in the range. The idea being that if you had a map, knew where you are and in which direction you wished to travel, then the gradient should tell you what 3-dimensional direction you would head off in.
Certainly there is such a function, though in some sense we are making it too complicated. As an example we have some pictures of the beautiful hills formed by the function
The (actual) gradient of this function is
Plugging in a point in the plane will give a single vector, and then taking the dot product of this vector with a direction will give a rate of change for f at that point, in that direction. Specifically, if we start walking north at unit speed from the origin, where the gradient is (4,8), taking the dot product with (0,1) shows that I will be climbing at 8 m/s (depending on our units!)
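The computation in this paragraph is just a dot product, which a two-line Python check makes concrete (using the gradient value (4,8) from the example):

```python
def directional_derivative(grad, direction):
    # rate of change in the given direction: the gradient dotted with the direction
    return sum(g * d for g, d in zip(grad, direction))

print(directional_derivative((4, 8), (0, 1)))   # walking north from the origin: climbing at 8
```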
Now the correct answer from my student's point of view would be that the answer is (0,1,8), since this is the direction in 3 dimensions that one would travel, and that the correct definition for D would have D((0,0),(0,1)) = (0,1,8).
Of course there are more sophisticated examples of this. Suppose a function is harmonic. That is to say, Δu = 0. Notice that in order to write down this equality, we already named our solution u. But just working from this equation, we can deduce a number of qualities that any solution must have: u is infinitely differentiable and, restricted to any compact set, attains its maximum and minimum on the boundary of that set. Such properties quickly allow us to narrow down the possibilities for solutions to problems.
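Both properties are easy to see numerically for a concrete harmonic function; here u(x, y) = x^2 - y^2 is my own choice of example, and the checks are a finite-difference Laplacian and a grid search over the unit square:

```python
def u(x, y):
    return x * x - y * y   # harmonic: u_xx + u_yy = 2 - 2 = 0

# the discrete (five-point) Laplacian vanishes at an interior point
h = 1e-3
x0, y0 = 0.3, 0.7
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h) - 4 * u(x0, y0)) / h**2
print(abs(lap) < 1e-6)   # → True

# the maximum over a grid on the unit square is attained on the boundary
n = 50
val, i, j = max((u(i / n, j / n), i, j) for i in range(n + 1) for j in range(n + 1))
print(i in (0, n) or j in (0, n))   # → True
```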
One method of problem solving I would like to focus on today is to name and describe your answer before you have found it. As a simple example, in order to answer the question “what number squared is equal to itself?”, we would:
1. Name the answer: Suppose x squared is equal to x.
2. Describe the answer: This is where the previously developed machinery comes in: We know that x^2 = x, so we deduce that x also has the property x^2 - x = x(x - 1) = 0, and conclude that either x = 0 or x = 1.
As a second example, much of linear algebra is naming objects, describing them, and then realizing you accidentally completely described them. For example, suppose we wanted to identify every matrix with a number, and make sure that every singular matrix has determinant 0:
1. Name the answer: Let’s call the answer the determinant, or det() for short.
2. Describe the answer: det() should be a function from matrices to numbers, and at least satisfy the following properties: (i) det(I) = 1, so that the identity matrix is associated with the number 1 (so at least some nonsingular matrices will not have determinant zero); (ii) if the matrix A has a row of zeros, then det(A) = 0 (so that at least some singular matrices will have determinant zero); and (iii) the determinant is multilinear as a function of its rows, which takes some motivation, but plays well with the goal of identifying singular matrices.
Well, it turns out that these three properties have already completely determined the object we are looking for! If I had been greedy and also asked that (iv) each nonsingular matrix be associated with a unique number, then I would have deduced that no such map exists. If I had not included property (iii), then I would have found there are many such maps. It is a fairly enjoyable exercise to deduce the other properties of determinants starting from just these three rules.
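For 2 x 2 matrices you can spot-check all three properties directly from the familiar formula ad - bc (a small Python check written just for this post):

```python
def det2(a, b, c, d):
    # determinant of the 2x2 matrix [[a, b], [c, d]]
    return a * d - b * c

# (i) the identity matrix has determinant 1
print(det2(1, 0, 0, 1))   # → 1

# (ii) a row of zeros forces determinant 0
print(det2(0, 0, 5, 3))   # → 0

# (iii) multilinearity in the first row: det(r + 2s, t) = det(r, t) + 2 det(s, t)
lhs = det2(2 + 2 * 4, 1 + 2 * 7, 5, 3)
rhs = det2(2, 1, 5, 3) + 2 * det2(4, 7, 5, 3)
print(lhs == rhs)         # → True
```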
Haven’t had a straight up “math” day in quite a while here. That ends now. While reading an article [PDF link], I came across the definition of a Young function: a function which is convex and where F(t) = 0 only when t = 0. Recall that for a real valued function, convex means that every secant line lies above the curve. So far, so good. Such a function might serve as a measure of how alike two functions are: rather than looking at (for example) the norm of u-v, we might first see what we can say about F(|u-v|), a more general question.
But here’s the proposition that made me pause: For 1<q<m<p, there is a differentiable Young function F and a constant C(m,p,q)>0 so that
1. , and
2. for all t > 0.
In fact, there was one more somewhat technical, but non-trivial assertion about this F (proposition 2.1 in the linked paper), but let’s focus on these two. Initially I was convinced that no such F existed, even satisfying these two conditions. Here’s how my erroneous thoughts went: suppose such an F were to exist. Property 2 gives us then that . Solving this “differential inequality” gives us that . A similar calculation will also yield that .
Now as a “back of the envelope” calculation, I tried plugging the bounds of F into property 1. Specifically, first I computed
Since q<m, the exponent is (strictly) smaller than 1, and the integral diverges (the indefinite integral looks like , where ). In particular, it does great from 0 to any finite number, but has a "fat tail". Similarly, the integral diverges, but this time because its singularity near zero is too big (the indefinite integral is the same as the previous one, though now α < 0). So this one does great from a very small number to infinity, but ultimately diverges.
Possibly I’ve given away the answer since I have emphasized my mis-steps, but here is an example of a Young function satisfying the first two properties. The trick is to construct the function out of two pieces:
for small t, and for large t. You can even select so that the derivative is continuous. Explicitly, suppose m = 3. Then we may set F(t) = t for t < 1, and for . Notice that F(t) is continuously differentiable, and that , so we know that .
One of my favorite courses in college was "Experimental Economics", where we examined how real live humans differ from homo economicus, the perfectly logical, profit-maximizing agent of classical economics.
A good example is the ultimatum game, a 2 player game where one person has $10, and suggests a way to split that $10. The second player can accept, and then they go on their merry way, or decline and both players get nothing. Homo economicus would accept $0.01, since she would be better off than when she started, but most of us would be pretty insulted by such an offer. Conversely, most of us probably would not offer only $0.01, even if it is the “best play”.
Another game I am intrigued by is called the St. Petersburg paradox, apparently first posed and studied by some combination of the Bernoulli family. The idea is that we will play a game where I flip a coin until it comes up tails. If it comes up heads n times in a row, you win 2^n dollars. The question is: how much would you pay to play this game?
A reasonable way to approach answering this is to compute that you have a 1/2 chance of winning $1, a 1/4 chance of winning $2, a 1/8 chance of winning $4, and so on. Hence, your "expected winnings" are (1/2)(1) + (1/4)(2) + (1/8)(4) + ... = 1/2 + 1/2 + 1/2 + ..., which is a divergent sum. This means that if I offer to let you play this game for $100, a "supremely logical" person would accept. Or for $1,000,000. Or any number. On the other hand, I know very few people who would pay even $10 to play the game. The Wikipedia article has plenty of good explanations for why, but this strikes me as a great example of where human behavior diverges from what is "optimum".
Since I like creating some content every once in a while, I also have a python script that plays this game using a random integer generator. Playing the game 10,000,000 times, the average payout was $11.66, and the largest payout was $4,194,304, meaning 22 heads in a row. This is somewhat meaningless, as we already calculated that the average payout will grow without bound as I perform more trials, but it gives a feel for how much the game might intuitively be worth.
from random import randint

def game():
    # flip until tails; a run of n heads pays 1 + (1 + 2 + ... + 2^(n-1)) = 2^n
    coin = randint(0, 1)
    count = 0
    tot = 1
    while coin != 0:
        tot += 2 ** count
        count += 1
        coin = randint(0, 1)
    return tot

def lotsogame(N):
    # play N games; return the average payout and the single largest payout
    tot = 0
    top = 0
    for j in range(N):
        n = game()
        tot += n
        if n > top:
            top = n
    return float(tot) / float(N), top

print(lotsogame(1000000))
A quick addendum to my earlier posts on logical pirates. I had searched for quite a while to find the .pdf where I knew I had read about this, but could not recall either the author or the title. My search history is littered with "logical pirates" and "pirate riddle booty". Great advertisements, though. In any case, the author was Ian Stewart (not to be confused with the Ian Stewart that many young distance runners encounter as "the third dude who beat Steve Prefontaine in Munich"), and the article can be found here [PDF link]. A beautifully illustrated, easy-to-read article that puts mine to shame.