Exhaustion!
It's been a while...sorry.
I've been working my ass off, maybe 50+ hours a week at work, then teaching 6 hours every Saturday from 9 to 3.
One thing about teaching: all your energy goes into it, and then you're drained. Also, I've taught before, but when the material is new (i.e. it's your first time teaching the class), the amount of prep work is staggering. I'm teaching MOC 2555, Programming Windows Applications in C#, and while the curriculum is all there for you, the students won't learn anything from it. Every lab has you add a few lines of code to an already existing project; you never develop anything - even something small - from the ground up. The module on asynchronous programming is lousy, and would have confused anyone reading it, even experienced C#-ers. Plus, more than half the students have little or no object-oriented experience, so I had to try to squeeze in an hour-long intro to OO. C# is very easy to learn if you have OO experience, and thoroughly confusing if you have none.
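(For the curious, the async module centers on the delegate BeginInvoke/EndInvoke pattern. Here's my own minimal sketch of the idea - this is not the courseware's lab code, just the shape of the pattern:)

using System;
using System.Threading;

// A delegate whose signature matches the method we want to run asynchronously.
delegate int WorkDelegate(int input);

class AsyncDemo
{
    static int DoWork(int input)
    {
        Thread.Sleep(1000);   // pretend this is a slow operation
        return input * 2;
    }

    static void Main()
    {
        WorkDelegate work = new WorkDelegate(DoWork);

        // BeginInvoke queues DoWork on a thread-pool thread and returns
        // immediately with an IAsyncResult token.
        IAsyncResult token = work.BeginInvoke(21, null, null);

        Console.WriteLine("Main thread keeps going...");

        // EndInvoke blocks until DoWork completes, then hands back its return value.
        int answer = work.EndInvoke(token);
        Console.WriteLine("Result: " + answer);
    }
}

The BeginInvoke/EndInvoke split plus the IAsyncResult token is exactly the part that loses people who've never seen a callback before.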
Anyway, I've learned a lot so far just by teaching the class. And where I once couldn't have cared less about XML, now I'm a convert.
As if this isn't all enough, my fiancée and I are trying to get the house out here ready to sell so we can move to the East Coast. She's taking two classes and working 50+ hours a week for Pepsi, so we have no life until Sunday. And we can barely go out Sundays while The Sopranos is still running. And I've got a contract to teach SQL starting in two weeks, and MOC 2389, ADO.NET, after that. Ugh...
I have to wait until early July to even think about the East Coast plans. Joey's going for a Vet Tech program and I'm trying for graduate school in Computer Science. I'm narrowing my focus down to computer vision, computational linguistics, and machine learning. I'm trying to locate research out there into context and/or concept modeling - that is, when a machine receives conversational input, it's one thing to process it, but what about retention? How might the data received be "stored"? Could we perhaps find a more universal data type that could house information garnered from conversation?
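To make the question concrete, the dumbest possible version of such a data type might be a node with weighted links to other nodes, strengthened whenever two concepts co-occur in conversation. This is purely a strawman of my own, not anything from the literature:

using System;
using System.Collections.Generic;

// A toy "concept": a label plus weighted associations to other concepts.
// The names and the additive weighting scheme are mine, invented for illustration.
class Concept
{
    public string Label;
    public Dictionary<Concept, double> Links = new Dictionary<Concept, double>();

    public Concept(string label) { Label = label; }

    // Strengthen (or create) a link, as if the two concepts
    // had just co-occurred in conversational input.
    public void Associate(Concept other, double strength)
    {
        double current;
        Links.TryGetValue(other, out current);
        Links[other] = current + strength;
    }
}

So hearing "the basketball is round" might do ball.Associate(round, 1.0), and repeated co-occurrence gradually builds a crude association graph. Real retention would obviously need vastly more than this, but it's one concrete shape the "universal data type" question could take.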
Then why computer vision? I think the above is hard enough as it is, but it's certainly harder without some other "sensory" data to associate with. I remember from a psychology class long ago that the eyes technically do (among many other things) edge detection, as well as filtering out redundant data, so to speak. I'd like to see how effective two cameras are at depth perception. If the images from two cameras are used together, can we determine light sources? Could we determine distance from shadows? Etc. Then object detection could be merged with the context/concept idea above by having two sources of data to associate with each other. I think it would be amazing to have a computer look at a basketball and a tennis ball, be asked the difference in size between the two, and be able to say that the basketball is x times larger than the tennis ball, what the radii are, and so on. The uses for a Hubble-like telescope could be fantastic.
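The basketball-versus-tennis-ball measurement is actually the easy part once you have stereo: the disparity between the two images gives depth by triangulation (depth = focal length x baseline / disparity), and real size then falls out of pixel size. A back-of-the-envelope sketch - every number here is made up for illustration:

using System;

class StereoSizing
{
    // Pinhole stereo: depth (m) = focalLengthPx * baselineM / disparityPx.
    static double Depth(double focalLengthPx, double baselineM, double disparityPx)
    {
        return focalLengthPx * baselineM / disparityPx;
    }

    // Given depth, an object's real width follows from its width in pixels:
    // realWidth = depth * widthPx / focalLengthPx.
    static double RealWidth(double depthM, double widthPx, double focalLengthPx)
    {
        return depthM * widthPx / focalLengthPx;
    }

    static void Main()
    {
        double f = 800.0;                          // focal length in pixels (invented)
        double z = Depth(f, 0.1, 16.0);            // cameras 10 cm apart -> 5 m away
        double basketball = RealWidth(z, 38.4, f); // ~0.24 m across
        double tennisBall = RealWidth(z, 10.6, f); // ~0.066 m across
        Console.WriteLine("Basketball is about {0:F1}x wider than the tennis ball.",
                          basketball / tennisBall);
    }
}

Shadows and light sources are a much harder inverse problem, but plain depth-from-disparity like this is well-trodden ground.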
Well, I've been on vacation since yesterday and want to continue slothing until Monday. Just wanted to post here during the break.
Thursday, May 25, 2006
Posted by ZagNut at 2:03 PM