
Archive for the 'DSA Minutes' Category

Shareholder Quorum


Another weekend away from work, and no, it doesn’t feel great. Somehow it just happened. Fuck.

Anyway. I’m trying to figure out how to implement Alroy’s SQ algorithm. On Friday, I couldn’t get his website to work (where he had posted the R code), but he replied on Saturday with the fixed link, and so I’m now sitting (at Darwin’s again) reading through his code and trying to understand how it works. I wish I could see it in action with an example data set, because I’m not entirely sure what sort of data the function actually takes (while his documentation is much better than the code junk I got from Rabosky, it still leaves much to be desired). Is it just counts of taxon occurrences? If so, i.e. if the function doesn’t identify what particular taxa are in the subsample, this is going to make the proposed exercise of subsampling the morphospace very difficult indeed. Well. It’ll require rewriting the function.

From a little bit more reading and some monkeying around with the code (i.e. loading the function in R and passing some sample data to it), I find my suspicion supported—it seems as though the function simply takes an array of numbers representing the occurrence counts of different taxa in a time bin, plus the other function parameters, and returns the average number of taxa in the appropriate sized subsample (over the requested number of trials or iterations). This does make calculating the diversity curve fairly easy, but makes it that much harder to get the morphospace to subsample.
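
From that description, the core of a counts-based SQS seems sketchable in a few lines of R. To be clear, this is only my reading of the basic algorithm: the function name, arguments, and defaults below are my own inventions, not Alroy's, and his real code surely handles refinements (coverage corrections, dominance, etc.) that this ignores:

```r
# Minimal sketch of counts-based shareholder quorum subsampling (SQS).
# 'counts' is a vector of occurrence counts, one per taxon; 'quorum' is the
# target coverage (0 < quorum < 1); 'ntrials' is the number of iterations.
# Idea: draw occurrences in random order without replacement, and each time
# a new taxon appears, add its frequency share to the running coverage;
# stop once the cumulative share reaches the quorum.
sqs_counts <- function(counts, quorum = 0.4, ntrials = 100) {
  shares <- counts / sum(counts)            # each taxon's "share" of the pool
  pool <- rep(seq_along(counts), counts)    # one token per occurrence
  richness <- numeric(ntrials)
  for (i in seq_len(ntrials)) {
    drawn <- sample(pool)                   # random draw order
    seen <- logical(length(counts))
    coverage <- 0
    for (taxon in drawn) {
      if (!seen[taxon]) {
        seen[taxon] <- TRUE
        coverage <- coverage + shares[taxon]
        if (coverage >= quorum) break       # quorum reached: stop drawing
      }
    }
    richness[i] <- sum(seen)
  }
  mean(richness)                            # average subsampled richness
}
```

A handy sanity check: with four equally common taxa (shares of 0.25 each) and a quorum of 0.4, every trial stops after the second new taxon, so the function returns exactly 2 regardless of draw order.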

Should I just rewrite the function for my own purposes? It doesn’t seem all that complicated, really… Aargh! I am unmoored. I don’t know what to do or what I’m doing. The approach I was taking in constructing my own SQS function back in the day was quite a bit different, passing the full database back and forth between functions; Alroy’s approach of just passing an array of counts seems much more efficient; it probably uses way less memory and is consequently faster. Although I do lose the ability to track actual taxon names. Maybe a combination of the two would be the way to do it—instead of the full database, have the function operate on a list of names?

Started by calculating Good’s U (by the original, simple formulation) for 2-myr time bins. There is very little variation in coverage estimated in this way. Correcting for this is going to do nothing for the diversity curve:

This is kind of an important plot, because it shows that implementing the SQS, at least in the simplest way, isn’t going to do anything to correct the diatom diversity curve from Neptune. I think I know why that is, too. Good’s U is measuring how well the standing diversity of a time interval is captured in the fossil record by looking for how many singletons there are, i.e. how many taxa only show up once. The greater the proportion of singletons, the more likely you’re still missing a lot of the standing diversity. Here’s the big but, though: basically all of the Neptune data is collected in m*n taxonomic charts where the m rows represent m slides prepared from borehole samples at m depth intervals, which the poor shipboard paleontologist scans through to check for the presence/absence or abundance of n different taxa. [This is the model of data collection that Dave Lazarus talks about in that recently published paper I reviewed for him at such great length last year.] This method makes it very unlikely to have singletons. I think that’s why the Good’s U values are all so high in the plot above.

The numbers would probably go down a bit if Alroy’s correction for dominant taxa were applied—i.e., take out the most abundant species, but that doesn’t really address the problem of the data collection method being strongly biased against singletons.
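For the record, here is Good's U in the original simple form (one minus the fraction of occurrences that are singletons), plus a stab at the dominant-taxon variant. The second function is my guess at how Alroy's correction works, so it should be checked against his published formulation before being trusted:

```r
# Good's U for a vector of per-taxon occurrence counts: u = 1 - n1/N,
# where n1 is the number of singleton taxa and N the total occurrence count.
goods_u <- function(counts) {
  1 - sum(counts == 1) / sum(counts)
}

# Variant that excludes the single most abundant taxon's occurrences from
# the total (my reading of the dominance correction, not Alroy's code).
goods_u_nodom <- function(counts) {
  1 - sum(counts == 1) / (sum(counts) - max(counts))
}
```

For counts of (1, 1, 2, 6), the simple form gives 1 − 2/10 = 0.8, while dropping the dominant taxon's six occurrences gives 1 − 2/4 = 0.5, so the correction does pull the estimate down substantially when one taxon dominates.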

What if there were another method to estimate coverage, not as vulnerable as Good’s U to the bias from the Neptune-esque method of data collection? What would that look like? I suppose it could look at how many taxa show up in only one borehole, since that’s sort of the equivalent of an ‘occurrence’ of a macrofossil taxon in PBDB. That would probably work quite well for the most recent time bins, where there are dozens of boreholes, but most of the Paleogene time bins have only a couple of boreholes at most, and so many would have a very, very low coverage by that measure (I think). It might be worth a try, I suppose.
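A quick sketch of what that borehole-based measure might look like, taking a hypothetical taxa-by-boreholes presence/absence matrix as input. Both the function and its interpretation are entirely my own speculation:

```r
# Borehole-based coverage sketch: treat a taxon found in only one borehole
# as a "singleton", by analogy with single-occurrence taxa in PBDB-style
# data. 'pa' is a taxa-by-boreholes presence/absence (0/1) matrix.
# Coverage is then Good's U computed on borehole counts: one minus the
# share of borehole-occurrences contributed by single-borehole taxa.
borehole_u <- function(pa) {
  per_taxon <- rowSums(pa)                   # boreholes each taxon occurs in
  1 - sum(per_taxon == 1) / sum(per_taxon)
}
```

For three taxa found in (2, 1, 2) boreholes respectively, this gives 1 − 1/5 = 0.8; with only a couple of Paleogene boreholes per bin, nearly every taxon would be a "singleton" and the estimate would crash, which is exactly the worry above.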

Grrr. This is not helping me make progress with the morphospace. I feel like I’m disappearing down a rabbit hole of distractions and unforeseen complications again. What I need is to get this paper done. I need to get my figures together, so that I can get the chapter written. So that I can move on. This is what I need to keep my mind focused on. I thought it would be straightforward to add diversity subsampling to the analysis of morphospace, but maybe it’s just too difficult. Maybe there are just too many complications with implementing SQ subsampling for this sort of data to apply it “straight out of the box”, as had been the plan all along. Well, fuck.

Maybe I just need to refocus on something else for a little while to let the frustration subside, because I’m pretty well boiling with frustration and rage right now. I also need to re-run the stacked 3D morphospace plot for my new cull of data so that I can plop that into my LaTeX document. Might be just the thing to do right now.

Life Just Got Busy


Since DSA yesterday afternoon. Once off the phone with Beau, I rushed home to get Kati ready for her flight to DC. Spent a much longer evening than usual working on the invitations—glue, ribbons, envelopes, calligraphy, etc—and didn’t get to bed until well after midnight. This morning, thought I’d be able to finish the rest in a quick whirl before work, but it ended up swallowing up most of the day and it was 3:30 by the time I finally made it to work via the post office to drop off the 72 beauties in their handwritten, bright yellow envelopes.

In the midst of the mounting stress over the upcoming meeting with Mike Foote, and the PlanktonTech report I have to write, and the status meeting with Andy to prepare for, and the résumé that needs putting together, and the R interface to finish up, I got an email from Dave Lazarus asking for comments on a paper he and John Barron are planning to submit to Nature. On—wait for it!—Cenozoic diatom diversity. Yes, believe it or not, I just got scooped once again. On work that I haven’t done yet, so I guess it’s not technically being scooped if it’s something you were thinking about doing someday but hadn’t gotten around to. Anyway, this is another thing on my plate, and while I’m sure it’ll be fascinating to read, it’s also another blow to my confidence for not getting that project done quickly, not to mention several hours next week down the drain at a time when I really need them. But, I also need Dave, so there’s no way I can blow him off—though I did email him back to ask how much feedback he wanted, and that I wouldn’t be able to provide much before late next week, because I’m busy. Humph.

Anyway. Now that I’ve vented some time-crunch stress, I’ll get to the task I started (but didn’t finish) yesterday—putting the keystone pieces into the Bridge of Rads database interface. Once in place, we’ll see whether I can jump up and down on it without it crumbling into a sad pile of rubble, as the dust swirls around me.

Made some decent progress over the course of the afternoon, putting all the major pieces in place—but noticed that there were quite a lot of little pieces missing altogether. This is the downside of “sit down and code” rather than developing a very detailed plan of what the program’s going to look like… But whatever, it’s not an enormously complex piece of software, and I can figure it out as I go along.

Left work at 7:15pm, by which point I’d gotten much of the way through—although I have yet to do a first test run of the whole interface to see if it all works… perhaps tomorrow?!

The Last (In-Person) DSA…


…but the beginning of a new project—DSA: The Book.

In spite of the bittersweet “last ever” backdrop of the meeting, it was very helpful—as summarized in Beau’s own words:

Andy had a look at the 1-pager and responded as per expectations. Andy’s review was very positive, lots of good vibes. After successful previous three day push, still aware of temptation to stop and procrastinate. Continueth previous approach: make a note on the blog and then move on… Important to maintain positive and energized thinking over these periods. Also potential arrangement with Kati joining to work in evening removes possible familial stress on the three day efforts.

Ben reported not using Omniplan at all. Focusing more on getting things done. Realized he sinks too much time into making these plans, and doesn’t really use them. Nature of research is so different from what he envisaged it to be. Much less systematic and plannable than he expected. Rapid development of ideas and experiments is in principle much more suitable, although in practice not always possible. Applicable to some projects more than others. Omniplan channels a certain kind of thinking, sets up false expectations. Implies he can easily switch projects and just needs to put in the hours and be done. Not coincident with the way he works, which is much more curiosity driven. Charles was trying to say this all along. However both OP and OF were useful in helping with back of the envelope calculations to check basic feasibility.

It’s clear that what is most important is motivation, not a framework. Motivation comes from success in achieving things and getting good feedback. Getting things on paper is even better – provides concrete feedback. This is why regular checking in with Andy and the 1 pagers has been so useful. Also recently snagged a big whiteboard and now has it next to desk with immediate status on every project.

In general, Ben reported feeling optimistic right now, much like back in January. Will know that things are going well and going differently when first set of results are in hand, and Andy approves and starts talking about how to write it up. At that point, can think about graduating. Don’t want to jinx anything at this point!

Ben is also thinking about getting a paper out of the confocal laser project – can he find a simple application and write it up? Andy kind of dismissive, but Ben should push on regardless. Potential collaborators in Germany? At some point in the future need to set aside some brainspace to sort out pitch to German folks.

Found myself drifting a little again, unsure about what to work on. Now that I’ve abandoned the strict 1-project-a-day schedule (which was itself a great improvement over the 3-projects-a-day schedule it replaced), and have my motivational sights set on the next three-day push, the rest of this week feels like it’s in a bit of a vacuum. Or, it would be, if it weren’t for the ever-useful deliverables set in the meeting this morning. I caught myself on the verge of slipping into the abyss of procrastination again, not knowing exactly where to start, so I turned to my gmail checklist of DSA tasks, and forged ahead…

The first task on the list was to compute the % coverage of genera in Neptune provided by those genera described in the Round et al., 1991 book. This, as a reminder, was in order to try to figure out whether I needed to code all of the Neptune genera for my morphospace analysis, or whether it would be sufficient just to consider the subset represented by those described in Round. The results are, well, sort of what I expected, though I’m not exactly sure how to interpret them.

What’s clear is that coverage does vary through time, and it does so systematically in the predicted way (i.e. the Round book more completely describes younger assemblages). Whether this bias is enough to mess up the results, I am not certain. My instinct is it would: only about 50% of the genera in the database would be showing up in the morphospace in the Oligocene, but about 70% would be showing up in the Pleistocene—this could make the Pleistocene look more morphologically diverse (the sciencey way to say it would be, “characterized by greater morphological disparity”)… Anyhow, it took all afternoon, but it’s a satisfying plot! And it’s one thing checked off my list for this week already.

DSA, EPS 8 Grading


After last night’s very disappointing CLS microscopy session, this morning’s DSA meeting with Beau was just the elixir I needed. He got me back on track, thinking positively about the months ahead, and perhaps more importantly, also about the past twelve months. Helpful as ever, Beau provided his notes on our conversation:

Confocal work: Ben reported that the inverse staining option wasn’t really returning positive results, and showed that the best pictures so far have come from the standard method. Sadly, he got news from the lab tech that with his departure at the end of the summer, the machines will probably be disappeared to various locations. This puts a hard 6 month deadline (at best) on the confocal work.

Presentation: Ben didn’t get a chance to work on this, due to various EPS 8 commitments, but sounded like he knew exactly what should be going where, and has plenty of sexy pictures to add. He needs to do some 3-D reconstructions of some of the newer confocal pics, but otherwise it’s all under control in that department.

Reflections from B: It’s quite a blow to hear that the machines will be gone at the end of the summer. I entirely appreciate the feeling that, despite all the hard work you’ve been putting in, circumstances appear to be conspiring against you in ways that are absolutely not under your control. But what is under your control is how you make productive use of the time ahead. Since you’ve assured me that it’s possible to get the measurements done within 6 months (possible or necessary, I guess both may be the same at this point!), you can now focus on pulling out all the stops towards getting this project whacked on the head during that period. Getting this done will not only leave you with a thesis chapter ready to go, but also prove a tremendous morale boost, which will help the other work you’re doing. The more you work on the machines, the better you’ll get at taking the pictures, and the better the dataset you’ve gathered will become. So hang in there, focus on the prize, and make sure you’ve got a clear path laid out for all the work during the summer.

For next week: Ben will run through his presentation, and for bonus points, also talk about potential milestones for the confocal work during the summer.

Spent the afternoon grading my part of the EPS 8 labs, then went home.

DSA is Back, Aids Return of Mojo


After another useless-yet-mandatory safety training this morning, had a great DSA meeting with Beau. As always, his perspective was invaluable, and while I would have thought it almost unimaginable when I got out of bed this morning, I now feel fairly comfortable and almost excited at the prospect of the progress report and the work ahead. Most of all, in being forced to talk through the muddle of thoughts circling around the progress report in my head, I realized that I haven’t been completely idle over the past twelve months, and I’ve actually put in a very significant amount of effort with some not inconsiderable progress resulting from it. Even if I haven’t nailed my methods down yet, I’ve come a lot further in the past year than I was allowing myself to see. Here’s Beau’s notes from the morning:

Ben discussed progress report due later in April. Worked through all four major projects:

1. Confocal microscope work: talked about the fact that he has a reasonably clear idea of how to gather the data, but needs to nail that down. Once he starts to crank out some measurements, he can then start to figure out appropriate ways to convert the matrix of data points into a morphospace.

2. FIB-SEM work: talked about the many frustrations of machines being down and staff being AWOL. But it was clear that the methodology for this is really coming together, and with some better luck on the machine availability side, some raw data could be generated fairly soon. Also mentioned the inverse staining idea, which he will try and get to in the coming work. This could address both the problem of staining individual fossil fragments, as well as ensuring a high resolution picture.

3. Rabosky/diversity: Ben mentioned that around Christmas he felt very close to being done with replicating Rabosky’s work, but has let it slide since then. Ben should avoid the tempting narrative that the whole thing wasn’t of any value because it wasn’t “new” work. But acknowledging that the committee will be most favorable to breaking new ground, should emphasize the fact that the method is pretty much ready to go and will be useful in making headway in the coming year.

4. Radiolarian project: this has been stalled for a year, and Ben reflected on whether this means the project is not worth continuing.

Some outcomes for the progress report:

– Emphasize the positive things that you’ve achieved this past year, and how it provides a solid platform to gather data in the coming months and move into some serious analysis.

– Talk frankly about practical problems encountered with securing and accessing necessary equipment, and perhaps ask for more support in this respect from the committee.

– Start with the meaty stuff (the FIB-SEM), some sexy pictures, and end with the less exciting material that you haven’t made so much progress on. Perhaps speak frankly with the committee about avoiding taking on too much work.

– See if Charles can be patched into the session via Skype.

Ben also talked about a new microscope strategy of 6-9 pm several times a week, although that risks not having staff around when there are machine issues. Could work around this with a follow-up morning session if needs be. Emphasis at this point should be on cranking up the time allocated to important data gathering activities.

For next week:

Ben will produce a framework for the presentation/report, including images already produced and placeholders for other images, and an initial stab at text. He will at the very least start to work on the inverse staining approach, and at best will get some images to show.

The rest of the day: EPS 8 lecture, then grading this week’s EPS 8 lab. Home at 5:30, decided to postpone working on the next week’s EPS 8 lab until tomorrow morning.

DSA, 12/8/2009


Ben

Progress on Deliverables

  1. Confocal prep—nope.
  2. Mesozoic rads—looked at it, but it was uninspiring. Of the parameters Ben measured for his MSc work (size, shell thickness and porosity), Ben can get at size from these pictures, but not shell thickness. Some measure of porosity possible. Pics are quite low res, though. Question is then, does this help? Not clear on whether the silica use matters so much. Basically only way to proceed is to weigh the sample. But probably will not continue with this line of attack.
  3. Started working on Zoe’s e-mail, but got confused about an isotope issue. Never sent it.
  4. Nope. Force majeure.

It’s been a rough week, but at the very least Ben now has one less thing to worry about! We are sick of milestones – they make us feel small and miserable. Ben wants a real job that has milestones and achievements, but things really just aren’t like that. It’s not a 9-5 job, and forcing it into this form just backfires. Makes Ben feel stupid – feels like he reaches for goals that are never reached. Every time he leaves work, he feels like he’s not making progress. Only way to make progress is to immerse oneself in it. Making plans and schedules is a pointless exercise. Need to just get ‘r done. Pretending things are other than they are does not get them done.

Deliverables

  1. Complete training on FIB.
  2. Continue with R, replicating Rabosky’s results.
  3. Send e-mail to Zoe.
  4. E-mail Dave about ODP stuff.

Beaudry

Progress on Deliverables

100% of last week’s deliverables perfectly accomplished! Congratulations.

Deliverables

  1. Fix diversions issue—that’s a major task. Diversions should be within 10% of historic values.

Other Thoughts

WQTM section has been debugged, but Beau hasn’t checked whether the results are reasonable. Need to invest 3-5 days to check whether that’s the case. Should be able to finish this before the year’s out. Also would like to get Travis’ feedback on progress—his PhD project was a similarly large and complex model. Also wants to meet with Mike. Ben suggests focusing the conversation on what Mike needs to see in order to OK a June graduation, motivated by the need to organize immigration/visa status and money issues. Also wanted to ask about how much time he thinks is necessary for draft iterations of the thesis.

DSA, 12/2/09


Beaudry

Progress Report

Fixed the first-year problem: it was a historic data set, not a problem with the model. CSU model not running properly yet, been consumed with calibration issues. Was looking at diversions, realized this was silly, looked at acreages instead. Spent 4-5 days improving fit between acreages generated by model and historic acreages. Only simulating six of the nine or ten crops being planted in reality, so there’s simplification involved. Spent time fiddling with this. It got better. Spent a lot of time building a calibration interface allowing futzing with parameters at the start of each run.

Re-did crop insurance process, rewrote algorithm. Was reaching limit of what simulation could do in terms of crop acreages, couldn’t fit correctly. Needed to add variables or change logic. Thought about adding risk tolerance. Had been trying to reduce the number of behavioral variables, but in the end added it because there was no other way to fit insurance into existing behavioral set-up of the model.

Then discovered some errors from months ago, fixed those. For example: spent three hours on Wednesday trying to work out why the cost of crop insurance couldn’t be plotted on the graph. Finally found he was asking for cost rather than income…

Just been struggling to calibrate, improving fit of model to reality. Came upon dead ends with acreages, especially vegetables (overplanting) and grasses (underplanting, both relative to historical data). Discovered yesterday that back in January he had designed a mesh with 1000 m grid squares, then updated it in March to 250 m grid squares, but forgot to update the size variables. That improved things—have to check today what the consequences are (that would explain, e.g., why there had been so little water available to each grid square).

In general, it’s a struggle to get the fit better, and time is ticking. Doesn’t know how to work harder or more efficiently. Beau finds himself in “tunnel vision” mode at times, eating three or more hours in a flash on something that may or may not actually be crucial to the end product. Ben suggested working on different areas of the model for shorter bursts of time (three hours at a time).

Ben

Progress

  1. FIB-SEM: booked Wednesday (today!). Spent Friday evening in the lab trying to get sample prepared. May have managed it (we’ll see!). First time around, next time will surely be easier.
  2. Didn’t get a chance to figure out confocal prep.
  3. No chance to look at the Mesozoic rad catalogue.
  4. Asked Andy about sieves – should be some in the lab, otherwise EPS will have some. Strange numerals, though. Don’t know who to ask…

Deliverables

  1. Figure out confocal prep.
  2. Look at the Mesozoic Rad catalogue. Should only take 10 minutes.
  3. Reply to Zoe’s e-mail which came during TG.
  4. E-mail Dave regarding isotope measurements and ODP samples, etc.

Got an e-mail from Zoe while he was gone – she’s made some silica isotope measurements. She had some questions. This is an interesting conversation; wrote some term papers in the past, but hasn’t really looked at it since. One standard model for interpreting isotopes in diatoms. Very closed, localized system which doesn’t fit with actual geologic time and mixing in the ocean. But no easy alternative. Thought Ben had a while ago: don’t just look at diatoms; compare diatoms and radiolarians, picking those rads which live in deeper waters – allows depth comparison, which allows determination of input values for silica models. Guy at Harvard now who makes isotope measurements, and does them very well. Would need to know from Dave how viable it is to get ODP samples etc. Feels a lot sexier than the current projects. Zoe measured the diatoms.

DSA, 11/11/09


Ben

Deliverables

  1. Look at Mesozoic Rad Catalogue, come up with ideas for measurement (if at all possible).
  2. Continue literature search on pre-Cenozoic rads (with increase in velocity if possible).
  3. Update OmniPlans to accord with current project view. Add license to OmniPlan on MacBook Pro.
  4. Find out about sieving vs. filtering dissolved rad samples, experiment with prep/cleaning methods.
  5. Continue stepping through Rabosky’s R code.

Beaudry

Progress on Deliverables

Multi-year cropping—check! Required massive restructuring. Threw out 52-week-based irrigation plan, and rewrote irrigation to be based on soil moisture, now allows multiple irrigation plans and plantings a year. Separated crops into perennial and non-perennial. Crops can now be harvested whenever they’re ready, don’t require anything else (budgeting, etc.). Friday afternoon reached an impasse—crops wouldn’t harvest. Discovered an infinite loop the agents got into (to do with moving planting flags from agents to farm objects). Then on Tuesday, farmers were maxing out their acreage, and there was a random error—turns out it had something to do with the way the grid is read into a single Java object at the very beginning.

Weather events—was surprisingly easy. Picked out 8 stations from NOAA with precipitation, downloaded monthly historical weather data (precipitation, in inches). Model runs will only be on historical timeframes, so predictability is less of an issue. Finding out where the stations plot on the grid was a challenge.

Miscellaneous loose ends—mostly on track. Will be done with things by Friday. For pricing scaling with acreage, there are no data to be found. Just using linear function now.

Other Updates

Had major disaster with Parallels and Time Machine (see Beau’s blog). Lots of meetings on Monday, none related to research; bad experience with a professor who wanted to know about agent-based modeling.

Had meeting with new committee member. Hadn’t read chapters, but didn’t bat an eyelid when Beau suggested graduating in June. Repeated what he’d said before in terms of comments—talked about referencing other literature, but he “doesn’t want to make you a prisoner of history”. Has a professorial poise Dawn never had, a lot calmer.

Reclamation will buy the university license for the programming environment, which means unfettered access to the debugger!

Ben’s Lens

Once again, an impressive record. You say you were lucky that things worked out, I say it’s a testament to your dedication and hard work that you managed to address such major and far-reaching issues with your model and substantially transform your code… in less than a week. Hats off to you sir! From my perspective, you have nothing to worry about in terms of your academic progress, and it looks like you are well on track. The one morsel of food for thought I could give you is to consider your mental and emotional stamina as you power on through the next few months. Yes, this is the final spurt down the home stretch, but it’s not a flat-out sprint. You can’t go working as hard as possible—you need to work at the hardest level that will allow you to keep going until the end. Your dreams might be a good indication to be mindful of sustainability in your exertion of effort. Decompression time is a necessary part of work, so try to keep those movie nights, long runs, bike rides, etc. as part of your week. Other than that, things look pretty sunny. You’re knocking down deliverables with a sniper’s accuracy and consistency. Keep up the good work.

Deliverables

  1. Finish tidying up, housekeeping on the model, e.g. plotting colors. So far (prior to the CSU meeting) been testing two scenarios out of five (baseline, and just water quality trading markets)—major testing will begin next week.

DSA Long-Distance, 11/4/09


Beaudry

Progress on Deliverables

See the posting on Beau’s website.

Deliverables

  1. Implement multi-year cropping for Alfalfa: this will require some delicate changes to the budgeting and crop planning structures in the Farmer agent, as well as some work in the Crop and Farm objects. I’m going to try and tread lightly, since these elements have by far caused me the most trouble during development, and I’m a little loath to make dramatic changes…
  2. Add in weather events to the model: this will require finding appropriate week-scale datasets for weather, hopefully on a regional basis, from 1999-2007. After that, though, it should be relatively straightforward to implement logic relating the weather to soil moisture and hence farmer decision making.
  3. Other miscellaneous loose ends.

Ben

Progress on Deliverables

  1. Started dissolution of Australia samples. Have now run out of acid, so cannot dissolve samples any further, but dissolution is progressing a little faster than with the Ohio samples.
  2. OmniFocusenization completed. This is tremendously helpful, but I have found that this new form of organization—combining OmniPlan, OmniFocus, and a strict iCal schedule—makes the Planning/Context duality of OmniFocus somewhat superfluous. iCal tells me what project I should be working on, OmniPlan tells me what part of the project I should be working on, and the project task break-down in OmniFocus planning mode tells me what task I should be doing: this makes the context mode of OmniFocus somewhat unnecessary.
  3. First look accomplished, but no more beyond that. Made it clear that I need to step through the code line-by-line, following closely along with the methods description in the paper.
  4. Radiolarian sample list preparation stymied by Papers set-up activation energy. No progress here.
  5. ODP MRC database found and downloaded, but not looked at yet. Contacted Zoe and requested samples from her—talked on the phone Wednesday morning and agreed on authorship (she will be co-author on any papers arising) and samples (she will send samples for which she has surplus material, within the next few weeks).

Deliverables

  1. Progress on sample list for radiolarian project (at least have carried out searches for Cambrian and Ordovician papers).
  2. Buy more HCl for the radiolarian dissolutions.
  3. Talk to Jc about having thin sections of Ohio samples made.
  4. Download catalogue of Mesozoic radiolaria, take a look at picture quality, brainstorm about potential measurements to make.
  5. Start going through Rabosky’s R code in detail.
  6. Look through ODP publications & do quick literature search to see what data there are about radiolarian/diatom chert distributions through time.

DSA Notes, 10/28/09


Beaudry

Progress on Deliverables

Beau reported difficulty with decision-making of agents in a particular situation (farmers trying to decide whether to buy more quota or pay a fine), such that it no longer makes sense to make the decision based on the instantaneous cost-benefit ratio, but rather requires looking into the future for distant opportunity costs and benefits. Ben suggested swallowing programmer’s pride and hacking a work-around that is not general, thus not elegant, but deals with this particular exception to the general way of making decisions. In the spirit of the spiral, the workaround could be a first pass and the full implementation of forward-thinking decision making could be the next pass (time permitting).

In the writing support group, Beau realized he has learnt something after all—the others did not seem to understand that hypotheses need to be tested incrementally in modeling, and that’s something you don’t understand unless you’ve worked through it yourself.

Debugging: using the demo version of the debugger improved things tremendously. Previously, Beau had to write output statements for everything he wanted to check in the code—now it’s automatic. Has sped up progress substantially. Got done what he wanted to get done, fixed a lot of bugs that wouldn’t otherwise have been discovered. Excellent!

OmniPlan: Beau presented his plan for the last phase of the PhD—very impressive. Has scheduled first draft of the dissertation to be submitted to the committee by the end of February, which leaves March and April for back-and-forth iterations of the thesis with the committee; defense to be held some time in May (in theory). Looks good!

One worry: currently not running model with allocations from CSU, but canned allocations. But this can be addressed in the calibration stage.

Deliverables

  1. Write CSU presentation. Re-do linked OmniGraffle flowcharts explaining structure of the model.
  2. Report on result of CSU meeting—what things they suggested can be reasonably incorporated/modified in the model? (In Denver all next week—at CSU Tuesday, U Denver Wednesday-Thursday). Report via blog post from Denver.

Ben

Progress on Deliverables

  1. Getting to grips with R: good progress on this, probably ahead of schedule. Looks promising, particularly relative to MATLAB. Worked through first 2 tutorials. Started to realize that it was teaching more basic syntax which Ben already knew. So figured out he’d just start doing stuff and then work out what syntax was needed.
  2. Planning: completed three plans, but hasn’t got around to translating each OmniPlan tasks to OmniFocus projects.
  3. Ohio samples: Ben has his own lab coat now! = real scientist. Donned it (excellent!), and found that the material is not quite dissolved yet. Topped up with acid (may not be necessary). Teachable moment: particular samples he has may not be suitable, too much iron or organic matter, heavy dolomitization. Andy seemed to think that the Australia samples might have radiolarians. Could lead to another chapter.

Deliverables

  1. Start to dissolve Australia samples (not more than a couple of hours work).
  2. Complete translation of OmniPlan tasks to OmniFocus projects.
  3. Diversity project: continue working on R. Have a first look through Dan’s code.
  4. Radiolarian project: start making a list of samples to request.
  5. Morphospace project: find ODP database. Start making a list of samples to ask for. Also write back to Zoe and ask for samples (also whether she has residues or prepared samples, etc)