This is my start into the wide world of bread making (makes 2 loaves).
Combine in mixer bowl:
- 3 cups whole wheat flour
- 1/2 cup nonfat dry milk
- salt, to taste (just a little)
- 2 pkg. dry yeast (or about 2 tablespoons)
Heat in saucepan until warm:
- 3 cups water or potato water
- 1/2 cup honey
- 2 tablespoons oil
Pour warm (not hot) liquid over flour mixture. Beat with mixer for 3 minutes. Stir in
- 1 additional cup of whole wheat flour
- 4-4.5 cups of white flour
Knead dough 5 minutes, using additional white flour if necessary. Place in greased bowl, turn, and let rise until doubled in bulk. Punch down. Divide dough in half and shape into loaves. Place in greased 9×5″ bread pans. Cover and let rise 40-45 minutes. Bake at 375°F for 40-45 minutes. If using glass pans, some say you should reduce the heat to 350°F. Also, some variation in baking time may be due to elevation.
Although there are a few good video formats that almost everyone should be able to play, it seems the only way to guarantee that someone will be able to view your video over the Internet is to use a Flash-video-based service, such as YouTube or Google Video. The problem is that these services have long lead times, slow uploads, and poor quality. You can do better by rolling your own. The basic components are as follows:
- Transcoder (such as ffmpeg)
- Flash wrapper (such as http://blog.deconcept.com/swfobject/)
The approach is to do the following:
- Transcode your video to DV format (it may be possible to go directly to FLV, but with ffmpeg for Mac OS X, this is the only method I found that works)
- Transcode your DV to FLV; if your source is 16:9, you can set ffmpeg to output at 640×352 to get a good equivalent.
- Edit the .html file that comes with the Flash Wrapper so that it points to your video. Also change the resolution to 640×352
- Upload swfobject.js and flvplayer.swf along with your .html and .flv files
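The two transcode steps above might look something like this on the command line (filenames here are placeholders, and the exact flags depend on your ffmpeg build):

```shell
# Step 1: source video -> DV (use -target pal-dv instead for PAL sources)
ffmpeg -i source.mov -target ntsc-dv intermediate.dv

# Step 2: DV -> FLV at 640x352 (a good 16:9 size);
# FLV requires a standard audio sample rate, hence -ar 44100
ffmpeg -i intermediate.dv -s 640x352 -ar 44100 video.flv
```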
That should do it! Here’s an example of what I produced from this effort: http://carson.oakenweld.com/leahNature.html
For those who already go to World's Gym and enjoy pumping up their physiques, it likely doesn't cross your mind that there is anything wrong with World's Gym. The truth, however, is quite different for a large number of people such as me.

The problem is that World's Gym operates, shamelessly, on this principle: that people will join it on contract and then never come. It isn't that they don't want people to come and be satisfied; it's just that they know that most people don't have time (as is the case with me) to go there. So, in a moment of weakness, a contract is signed, and then for the next six months to a year, money is slowly bled out of the customer's pocket.

Who does the bleeding? Paramount Acceptance. It isn't a matter of fairness or honesty: if you sign a contract, you should fulfill it. The problem for me is that they offer no real service, except a heated room with dead weight. Truth is, I admit that I despise contracts; but let's consider other contractual services, such as cell phones: yes, they fine you for terminating your contract early, but all the while you do have a cell phone that is using the local towers, whether someone calls or not. Or how about renting a house: sure, you have to stick to your contract, but you can live in the house you are renting for the entire duration. At World's Gym, you can hope for, at best, a couple of hours in a sweaty, stinky room. Indeed, only a contract could keep me paying for this service.
Here’s the Fenisoft 3D raytracer, a sister product to the Fenisoft 3D scanline renderer. The difference, of course, is that instead of using a scanline approach with a z-buffer, I “project” rays out to determine intersections.
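The raytracer's source isn't posted here, but the heart of any raytracer is the ray-object intersection test. Here is a minimal illustrative sketch of a ray-sphere intersection in Python (my own sketch, not the Fenisoft code):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive distance t along the ray to the sphere,
    or None if the ray misses.  direction is assumed normalized."""
    # Vector from sphere center to ray origin
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # the quadratic's "a" is 1 for a normalized direction
    if disc < 0:
        return None         # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray shot down the z axis from the origin hits a sphere at z=5,
# radius 1, at distance t=4
t = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```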
My teacher made the comment that most people show off their raytracers by showing some spheres hovering over a checkerboard. Of course, I have seen this in my days with POV, so I couldn’t resist. Here it is, in all its glory: balls over a checkerboard (with a nice polygon – no meaning implied by using a triangle)
Here is a video of our mechanized, backlit lion. Note the 10 stars, one for each member of the Lion House. This shield has become a nice conversation point at parties.
Grid filtering is a useful technique in Artificial Intelligence that lets you determine the true state of your environment from a noisy sensor. As an example, imagine you were a tank in a 3D world that you had never explored. Of course, to make this contrived example less realistic, I will assume you can only “see” what is in the world through your noisy sensor, which basically tells you if there is an object near you. By “noisy”, we mean it doesn’t always return accurate data: most of the time it tells the truth, but because it isn’t perfect it sometimes lies. Using Bayesian reasoning, we can sample several times in one spot to pin down the true state. Where one single sample might leave us uncertain, five or ten samples should leave us almost certain (but we’re never really 100% sure, just as I feel in my dating life).
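The per-cell belief update described above boils down to one application of Bayes' rule per sensor reading. Here is an illustrative sketch (not Dr. Goodrich's code; the 0.8 sensor accuracy is an assumed value):

```python
def bayes_update(prior, reading, p_correct=0.8):
    """Update the belief that a cell is occupied after one sensor reading.
    reading=True means the sensor reported an object; p_correct is the
    (assumed) probability that the sensor tells the truth."""
    if reading:
        likelihood_occ, likelihood_free = p_correct, 1 - p_correct
    else:
        likelihood_occ, likelihood_free = 1 - p_correct, p_correct
    num = likelihood_occ * prior
    return num / (num + likelihood_free * (1 - prior))

# Start uncertain; repeated samples of the same cell sharpen the belief,
# even though one of the readings here is a "lie"
belief = 0.5
for reading in [True, True, False, True, True]:
    belief = bayes_update(belief, reading)
```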
So we implemented this grid filter based on some code that Dr. Goodrich provides. We then produce an occupancy grid where black represents “likely” objects and white represents nothing. We then take it a step further to find corners, and from these corners infer boxes. Below are a few images that show the “believed” state of the world after scanning through it using our sensor and the grid filter logic. On top of this, we have overlaid translucent red or blue squares to indicate obstacles. You’ll also notice tick marks around these squares, which simply indicate where our corner-finding algorithm thought the obstacle edges were.
The cool thing is, this is really simple and useful, since just about any sensor exhibits some noise and hence some uncertainty in its interpretation. This particular example works best for static worlds; the Kalman filter is better at modeling dynamic things, it seems.
Below: Map of our “world” after using a grid filter and corner detection. Note how “noisy” this is, and yet we were still able to find all our obstacles (the dark black lines). Also note that the colored squares, which are our “believed” obstacles, sometimes cover regions that are larger than our true obstacles. For example, around the “L”-shaped obstacles, we have a simple square. This works, because we don’t want to get stuck in an “L” anyway (our code for getting our tank out of concave regions isn’t, well, there, really).
This is one of those projects that seems much cooler to the maker than the observer. Here I have a biplane. What makes it special is that it was rendered using Carson Fenimore’s 3D renderer. If you have never heard of it, that’s ok – that probably won’t change, even after I finalize things with Pixar. It will probably remain anonymous, just as my stock and pay on the deal will be kept secret, leaving me superficially the same, as always, a humble man unwilling to brag.
This renderer was implemented entirely in C++ under Mac OS X and makes use of no external libraries, other than the STL for data containers and OpenGL for drawing 2D points (really 3D points with a Z value of 0). This means this renderer does the following:
- Polygon rasterization – quads and triangles
- Z Buffering – using patented hidden secret technology
- Geometric transformations (scale, translate, rotate)
- Arbitrary view (camera can be anywhere; specify view angle along with up, look-at, and look-from vectors)
- Phong shading
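As a taste of that last item, here is a rough sketch of the classic Phong intensity computation (my own illustrative Python, not the C++ renderer; the coefficient values are arbitrary):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def phong(normal, to_light, to_viewer, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    """Classic Phong intensity at one surface point:
    ambient + diffuse * (N.L) + specular * (R.V)^shininess."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diff = max(0.0, sum(a * b for a, b in zip(n, l)))
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * diff * nc - lc for nc, lc in zip(n, l))
    spec = max(0.0, sum(a * b for a, b in zip(r, v))) ** shininess
    return ka + kd * diff + ks * spec

# Light and viewer both along the normal: full diffuse plus full highlight
intensity = phong((0, 0, 1), (0, 0, 1), (0, 0, 1))
```

The tight specular lobe from the `(R.V)^shininess` term is what produces the shiny spot on the apple below.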
The lesson I learned from this is: Don’t be deceived: simple things are often not easy to do. This was not easy to implement – there’s no “cookie-cutter” way to do your own renderer, especially if you have a weak background in linear algebra.
Biplane – in all its majesty
Apple – Notice the Shiny spot thanks to Mr. Phong
General – Very large and complex model
This is a new classic in my book: combine stir-fried chicken on a well-boiled set of potatoes. Scrum-diddly-umptious!
Ah, the Hough: now there’s an amazingly simple method for finding circles, lines, and other shapes. Lest this sound too narrow an application, consider the wild possibility of finding a pool ball. The Hough can find these in almost linear time. Amazing!
The idea is this: build an “accumulator”, which is really a voting array over the parameters describing the object you wish to find. In the case of circles, you could have one accumulator for each approximate radius. Your task is then to go through the source image and find features which “might” be parts of a pool ball; in this case, edges work well. For each edge point, we “vote” in a ring of the given radius around the point. If we do this for all points around a ball, the center of the ball will collect a high number of votes.
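The voting scheme just described can be sketched as follows (an illustrative sketch, not my original implementation; the grid size and number of angle steps are arbitrary choices):

```python
import math

def hough_circles(edge_points, radius, width, height, angle_steps=64):
    """Vote for circle centers: each edge point votes in a ring of the
    given radius around itself; accumulator peaks are likely centers."""
    acc = [[0] * width for _ in range(height)]
    for (x, y) in edge_points:
        for k in range(angle_steps):
            theta = 2 * math.pi * k / angle_steps
            cx = int(round(x - radius * math.cos(theta)))
            cy = int(round(y - radius * math.sin(theta)))
            if 0 <= cx < width and 0 <= cy < height:
                acc[cy][cx] += 1
    return acc

# Synthetic example: edge points on a circle of radius 10 centered at (50, 50)
pts = [(round(50 + 10 * math.cos(a / 32 * 2 * math.pi)),
        round(50 + 10 * math.sin(a / 32 * 2 * math.pi))) for a in range(32)]
acc = hough_circles(pts, 10, 100, 100)
# The accumulator peak should land at (or right next to) the true center
```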
Here’s an example image with some pictures of the accumulator for circles of radius 32 and 48.
Parameter Space at radius 48
Parameter Space at radius 32
Shown below is the final result of my Hough Transform for radius 32. Note that it missed one ball, but had no false positives. Not bad, especially considering I used a general approach that isn’t “hand-tuned” to this image.