Jun 14 2007
 

Tim pointed me at the article on devchix about barriers women face in tech communities; it’s certainly sparked a lot of interest and reactions out there.

My reaction was two-fold. One was to think “is that how most women are?” To understand that, you have to remember that I’ve spent my entire life past the age of 17 in groups that were predominantly male, first in physics and then in computers. I’ve often discovered that I look at things one way and some woman I talk to about it will see it quite differently. So until I read some reactions to this article, I thought maybe it explained something I didn’t know about the way women-only groups work, since I’m not actually in any (even the bookclub I’m in has men and wouldn’t feel right to me if it didn’t; the only women-only groups I’ve been in recently were knitting classes, and those only last a short time).

The second reaction was that she makes some fairly strong statements that are testable (there’s that physics background coming out):

    I have experimented with this myself using a male pseudonym to post articles, and being told that the articles are informative, useful, great. Six months later I republish the exact same article, using a different title and a female pseudonym, and suddenly the article is horrible, technically incorrect, useless. It’s a fascinating study.

It’s actually a hard thing to test. Many people publish articles on their own blogs, so they can’t suddenly change their name and gender there; where else do people publish these days? And how much of publishing information is about reputation, where readers reason that because the author has been right about other things in the past, they are probably right about this as well? That kind of reputation doesn’t allow switching identities readily either. I would like to see some actual data and testing of the proposition, and not just from one person.

Shelley wrote up her reaction; read both the articles as well as some of the comments and links for a fuller view. [At first I wrote “balanced view”, but until we know more about the issues, who’s to say where the centre (and therefore the balance point) is?]

  8 Responses to “Tech Women”

  1. I can only repeat what I’ve said elsewhere: the world is a better place when women treat men less like territory and men treat women less like appliances.

    Remove all sexism from the workplace and the results are far, far worse than if everyone practices tolerance, compassion and self-restraint. This isn’t Panglossian; it is practice, and it is age-related.

    As to the lack of women, why is it that the SGML community had lots of them? My distinct memory on entering the computer science workforce in 1980 was that it differed precisely because of the presence of women. Why is it that online web groups have struggled with this for so long? I noted this on the VRML lists in the mid-nineties; possibly it has something to do with the demographics (age and education) of the members, though education is not a good predictor of polite behavior.

    Shelley said it: respect for self and others, where respect is neither self-obsession nor finding all fortune in others’ eyes (see Shakespeare, http://www.albionmich.com/inspiration/whenindisgrace.html).

  2. Thank you for saying “predominantly male” instead of the more accusatory “dominated by men” that many use when discussing this issue.

  3. You write “It’s actually a hard thing to test.” But actually I think it might not be that hard to test. How about some kind of randomized testing? Start two blogs, one with a female identity and one with a male identity. Write a stream of articles. After you finish any article (and solemnly swear not to revise it further, not even the tiniest bit), flip a coin and let the coin flip determine whether the article ends up as male or female.

    The trickiest part, it seems to me, is figuring out how to score people’s reactions. If you end up thinking you see systematic differences but you can’t put your finger on how significant they are, it would be quite nice to be able to compare your “13.8 positive” rating on one blog to “18.4 positive” on the other, and I don’t know how to do that. But if the difference in reaction is consistently huge (as the quoted “informative, useful, great” vs. “horrible” passage leads me to expect), there would be no need to sift through statistical properties of numerical data; you could just print out the comments on each blog side by side and let the differences in tone speak for themselves. (A sketch of this kind of scoring comparison appears at the end of this comment.)

    That would not be a perfect test, and I can think of various problems. For example, perhaps one controversial post could end up leaving part of one blog’s audience persistently angry, so that even if you tested with 58 blog posts it wouldn’t be anywhere near 58 independent samples. But as I understand it, social scientists have developed a big bag of tricks for this kind of thing, so you might be able to use those tricks to improve my simple-minded idea and dodge problems like that.

    One trick that might help would be to write a private ancestral blog post, then edit it into a pair of fundamentally similar public blog posts: keeping very similar content, but adding as many superficial differences as you can think of. Then flip a coin to determine which superficially different variant ends up on which blog.

    If anyone reading this decides to do something like this, by the way, I encourage you to begin by vowing to release the results no matter what you find (on the web anyway; I have no idea about the practicalities of journals in the field), and even if you cancel the project partway through (in which case you just release the results so far). One disturbing worry about published research is what weird distortions get introduced by people’s decisions of “nah, not worth publishing.” Since those decisions happen in private, it’s hard to estimate how they tend to be distributed, but sometimes people guess that they might have something to do with the persistence of some kinds of scientific errors.
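    A minimal sketch of the scoring comparison above, in Python, assuming the hard part (turning each comment into a numeric positivity rating, whether by human raters or some text classifier) is already solved; the scores below are invented purely for illustration. The permutation test asks how often randomly relabelling the pooled scores produces a gap as large as the one actually observed:

        import random
        import statistics

        # Hypothetical positivity ratings, one per comment, for the two
        # pseudonymous blogs. In a real run these would come from raters
        # (or a classifier) scoring the actual comments.
        male_byline = [4, 5, 3, 5, 4, 4, 5, 3]
        female_byline = [2, 3, 1, 4, 2, 3, 2, 2]

        def mean_gap(a, b):
            return statistics.mean(a) - statistics.mean(b)

        observed = mean_gap(male_byline, female_byline)

        # Permutation test: if the byline made no difference, shuffling
        # the labels should often produce a gap this large. (Note this
        # treats comments as independent samples, which the caveat above
        # about a persistently angry audience says they may well not be.)
        pooled = male_byline + female_byline
        n = len(male_byline)
        trials = 10_000
        extreme = 0
        for _ in range(trials):
            random.shuffle(pooled)
            if abs(mean_gap(pooled[:n], pooled[n:])) >= abs(observed):
                extreme += 1

        print(f"observed gap: {observed:.2f}; p is roughly {extreme / trials:.4f}")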

  4. That would work – you’d probably have to set up both blogs as anonymous to get around the fact that curious people would hunt in search engines to find out more about the authors. Then the biggest problem becomes making people aware of the two blogs by figuring out how to link to them appropriately and “equally”. Probably the easiest way to do it would be to have a few people work as a group to write the articles, flip the coin as to which identity publishes it, then link to articles occasionally from their own blogs. It’s certainly an interesting idea, and I haven’t heard of anyone trying it.

  5. You’re right, I don’t have any very good idea how to make separate blogs exactly equally noticed, though your ideas for making them approximately equally noticed could well be good enough. On the other hand, I do have an idea for dodging the problem: instead of writing two blogs, write a stream of comments on someone else’s busy blog, and randomize the sex of the commenter.

  6. Avoiding selection problems (not predisposing the experiment to a given set of responses, the self-fulfilling prophecy) will be easier with multiple contributors, but not perfect. The topics also have to be gender-neutral, to better isolate the test from gender hot buttons.

    You need a very precise definition of what you intend to measure first; I’m not sure the article that prompted this really has that yet. If you aren’t testing for specifically male reactions, then you are testing the overall reaction to articles whose author has a declared gender identity.

    And how do you know who is which, given pseudonymous and anonymous responses?

  7. Len, remembering that both women and men can be sexist, I would personally be more interested in knowing the total reaction, not in separating out the male and female reactions. You’re right that designing an experiment to reliably separate out male and female reactions would be more difficult.

    William, I guess the commenting solution could work, although technical comments are rare. And given that the original claim was that *technical* articles were treated differently based on the supposed sex of the author, this test should stick to technical articles and not delve into political or social articles or comments; that would be testing something different.

  8. True. If the measurement is total comments by type, based on the gender of the author, then sorting them by the sex of the commenter is needless, as long as the conclusions drawn from the data are similarly limited.

    I’d be interested in surveys of high school seniors and college freshmen about career choices and the motivations behind them. Megginson mentioned the uptick in entrants in the early eighties. I came back to the field in 1980, having worked a bit with computers in the very early 70s in college. I found it boring and went off to pursue music. Later, when $$$ became a high priority after college, comp sci was the easy way into the local engineering world, and technical writing was a *build from spare parts* endeavor by the engineers. At the same time, comp sci was considered elite and highly paid.

    Comp sci really only became interesting to me when I began to think of applying it to problems that interested me, unlike a piano, which was always interesting on its own terms. In other words, if the key to engagement is interaction, perhaps the interactions that interest the current candidates are not perceived as being enhanced by computers. Again, a values issue.

    I do wonder if one of the drivers behind the slopes in those curves, for totals across cultures and both genders, is status. My reaction to the web explosion of the 90s was that the web became to that generation what rock ’n’ roll was to some of my own.
