In six months, my plan for the future will officially run out.
Now, if you happen to know me, you’ll know that I’ve never been one for planning for the future. I like to think of myself as being spontaneous, and being able to go with the flow. The alternate way to spin this, though, is that at very random times, I can be impetuous and impulsive, with no regard, or at least very little regard, for the consequences at hand. And I tend to favor the path of least resistance, which we all know eventually leads to the bottom of the sea.
But there are good, well-validated reasons for my devil-may-care attitude. If you think about it, we control very little about our lives. The outcome and course of your life are at least 50% determined by the personal characteristics and socioeconomic status of your parents. Things like what kind of high school you go to are completely controlled by (1) accidents of geography and (2) how much money your parents are willing to part with in order to stack the deck in your favor when you apply to college. Other things, like who you make friends with, and who (or whether) you marry, are also largely accidents of propinquity. If you stop and think about it, the things you have the most control over are things you probably take for granted, do over and over again, day in and day out, and, sadly, rarely savor or enjoy.
Hence, the thing that my oldest friend taught me when we were still in high school: the simple pleasures in life are what count the most.
But this is where the path of least resistance thing clicks into place: somewhere between two-thirds and three-quarters of my extended family is in health care. So it was probably the most natural thing for me to follow in their footsteps.
There were many junctures in my life where this particular goal came into extreme doubt, but the decisions are too many, and there were too many twists and turns, for me to even imagine what my life would be like now if I had chosen otherwise.
In the end, this one goal I set for myself, and at times took for granted, guided the course of the last ten years of my life.
Truth be known, I’m going to be three years behind schedule, but by my standards that’s a pretty decent margin of error. I’ve most definitely taken some interesting detours along the way. If everything had gone according to plan, though, I would’ve been done with medical school and residency by the time I was 28.
On the other hand, sometimes Luck gives you small, but valuable and worthwhile, gifts. If I had started residency more than a year earlier than I did, I would’ve done my intern year without the protection of the mandatory 80-hour work week or the 30-hour (24+6) day. (As miserable as it was already, I can’t even imagine how awfully painful this would’ve been.) If I had finished medical school a year later than I did, I would’ve had to pay $1,000 for the privilege of taking the clinical skills examination as part of the licensing process, which basically involves spending half a day interacting with fake patients. From what I understand, the only real thing it does is weed out the complete sociopaths, but of course no one is ever going to do a rigorous analysis of whether or not this test even succeeds at this modest task.
But I am certainly at an Iñigo Montoya moment. I’ve been so busy becoming a physician that I have no idea what I want to do with the rest of my life. I have this premonition that it’s not going to be an overly long span of time, in any case, since I don’t do very much to maintain my health, and I’m finding that it’s actually starting to fail in some ways. But whether we’re talking about the next few years, or the next few decades, I still find this great big yawning chasm of the unknown before me.
I suppose I also learned a lesson from the movie “City Slickers”, specifically from the late Jack Palance: one thing.
All it takes to have a good reason to live is to have one single goal.
The crux of the eternal static versus dynamic typing debate is just how much you are willing to let the computer (or, more accurately, the language implementers) decide what you mean. Those who favor static typing tend to favor explicit direction over implicit intuitive understanding, and strictly-defined categories and hierarchies rather than free-for-all tag webs and interconnections. The static typist immediately recognizes that the computer (specifically, the compiler or the interpreter) is a non-intelligent entity that must be told exactly what to do, or else you’re liable to saw your own foot off. The dynamic typist, while not delusional about just how intelligent the computer is, is willing to have a little more faith in the language implementers, believing that they will do the Right Thing™ with the input that is fed to them.
It is apt that most dynamically typed languages are interpreted, and it is not just metaphor that the most important part of an interpreter is the parser. I believe the goal of dynamically-typed languages is to take input from a programmer and output machine language that does pretty much what the programmer means. In other words, interpreters of dynamically-typed languages need to be able to understand human idioms.
This is intrinsically more likely to result in unexpected behavior, but we aren’t talking about obvious operations like addition or concatenation. We’re talking about more abstract things, like how certain tables in a database should be related to one another, or what to do with that extra parameter when the object passing a message isn’t expecting it.
The static typist is most comfortable telling the computer exactly what to do in these circumstances. This can be a time-consuming and error-prone activity. The dynamic typist is probably more willing to let the language implementers make these decisions, with the knowledge that what you thought you put in may not be what the interpreter actually thinks you put in, resulting in perhaps wildly erroneous output (but not a crash!).
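To make the “extra parameter the object isn’t expecting” case concrete, here is a minimal Python sketch (the post names no language, and the class and method names here are my own invention). It shows the dynamic-typist’s bargain taken to its extreme: instead of crashing on a message it doesn’t understand, the object quietly absorbs it and returns nothing — no crash, but possibly not what you meant.

```python
class Forgiving:
    """A hypothetical object that absorbs messages it wasn't expecting."""

    def greet(self, name):
        return f"hello, {name}"

    def __getattr__(self, message):
        # Called only when normal attribute lookup fails: instead of
        # raising AttributeError, hand back a no-op that swallows any
        # arguments it is given.
        def absorb(*args, **kwargs):
            return None
        return absorb

f = Forgiving()
print(f.greet("world"))       # the message we defined works normally
print(f.frobnicate(1, 2, 3))  # an unknown message is silently absorbed
```

The static-typing equivalent would refuse to compile the `frobnicate` call at all — which is exactly the trade-off described above: an up-front error versus a program that keeps running with output that may be wildly wrong.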
Some authors point out the fallacy of framing this as the difference between strong typing and weak typing. After all, any language that lets you implicitly cast between different types is de facto weakly typed, and the most popular statically-typed languages allow you to do just that.
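The strong/weak distinction is easy to see in Python, which is dynamically but fairly strongly typed (a sketch of my own, not from the original post): numeric widening happens implicitly, but coercion across unrelated types is refused, whereas a weakly-typed language would guess at a conversion.

```python
# Numeric widening happens implicitly, even in Python:
total = 1 + 2.5              # the int 1 is silently promoted to a float
print(total)                 # 3.5

# But crossing unrelated types is refused outright, rather than coerced:
try:
    "1" + 2                  # JavaScript would happily give you "12" here
    coerced = True
except TypeError:
    coerced = False          # Python declines to guess what you meant
print("coerced:", coerced)
```

So a language can sit anywhere on the strong/weak axis regardless of whether its typing is static or dynamic — which is why the real axis is the one named next.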
So the real axis is static vs. dynamic typing.
But I think another level on which to think about these things is the difference between brittle handling and flexible handling.
In this day and age, when we have clock cycles to spare and CPU cores running idle, I think we should really expect more of our computers. Programming languages should be more tolerant of bad input. I’m not saying that a compiler should just ignore syntax errors, or let a grossly mis-cast variable go ahead and ruin your stack, but language implementers really should start using all this extra CPU time we have. Capabilities like self-reflection are the beginnings of this.
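As one small illustration of what self-reflection looks like in practice, here is a Python sketch (the helper name `describe` is my own): a program asking, at runtime, what an object can actually do — the kind of self-knowledge a language could lean on to handle imperfect input more gracefully.

```python
import inspect

def describe(obj):
    """Use runtime reflection to list an object's public, callable methods."""
    return [name for name, member in inspect.getmembers(obj)
            if callable(member) and not name.startswith("_")]

print(describe("hello"))  # every public method a string supports
print(describe([]))       # every public method a list supports
```

A tolerant runtime could use exactly this kind of introspection to, say, suggest the method you probably meant when you mistype one, instead of just crashing.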