
In Favor of the Functional Separation of Uploaded Minds and Simultaneous Mind Clones


By Ben Hyink
Ethical Technology

Posted: Sep 11, 2012

Exposure to some types of information can constrain one’s real options and impose responsibilities one might rather avoid. To set the stage for a flourishing culture of mind uploads, we need to enable people to live with a freedom from some kinds of potentially harmful information.

The saying “What you don’t know can’t hurt you” may sound condescending, even patronizing, in most cases. Not knowing of a threat does not make you immune to it, any more than a young child covering his eyes makes himself disappear to his playmates in a game of hide-and-seek. However, there are cat-and-mouse games in which one may be exposed to much greater risks as a knower of secrets than as a civilly acting, oblivious person of good will. The heaviest price of knowledge may be the burden of having to use it in determining one’s own actions, with an understanding of the effects those actions will have on others. One may find oneself with power and responsibility in a negative sense.

What started me on this line of thinking was a section of Steven Johnson’s book Where Good Ideas Come From: The Natural History of Innovation [1]. In it, he describes independent events that occurred in the U.S. before the September 11th attacks that could have alerted the FBI in time to intercept the terrorists before they struck. He blamed the intelligence failure on what he considered an outmoded practice of hierarchically filtering intelligence information before flagging it as important; instead, he suggested radical openness and non-hierarchical access to intelligence information to help agencies recognize patterns better and faster than at present. He expressed disappointment that, after the September 11th investigation and hearings, no such changes were made in how the agencies process information.

I don’t consider myself qualified to analyze best practices of intelligence communities, but from the outside I have a sense that the more coherent and comprehensive one’s picture of highly classified information becomes, the less freedom one would have in important respects. I don’t envy people in high offices who do access that information and have to make decisions in the interest of their constituents and their nations that may never be widely known. The military-industrial-intelligence complex has grown tremendously since WWII, and some legacy elements and practices would be practically impossible for any particular leader or small group to change.

The international community is semi-anarchic, with rogue states that do not share our qualms about using the “dark side” to accomplish their ends, so to some extent people at the top must negotiate paths forward not just for the greater good but also for the lesser evil, acknowledging that unavoidable evils are inherent in the work. Some of that work includes the secret development of emerging technologies in ways that will transform the future of human culture and the world economy. The innovative independent reporter Gina Rydland of Future Extreme Media [2] has made it the agenda of her new organization, “The Virtual Center for the Study of Progressing Technologies” [3], to assist laypersons in analyzing anticipated developments and proposing ways to promote technology introductions that will result in desirable outcomes for humanity. This article was written with her center in mind.

As a former undergrad focused on neuroscience, my thoughts often drift toward mind uploading and the future of intelligent sentient life. I think that the emergence of mind uploading, and the intelligence augmentation it enables, will release unprecedented potential to improve the condition of life for humanity, especially at the point where advanced physics can seamlessly interface biological brains with functional copies running in supercomputers. However, ubiquitous radical intelligence augmentation, like any widely distributed technology that superempowers individuals, has the potential to threaten established centers of power, particularly governments. The ultimate agenda of a nation is its own preservation, ideally to ensure a free and civil society for the governed, and governments will seek to prevent disruptions that could threaten their very existence. The natural protectors of the existing power relations against such threats are the intelligence communities.

While I do not doubt the sincere desire of most of the people in these communities to preserve and advance certain interests for the common good, isolated cases of abuse have been recorded, such as LSD experiments in hippie communities. It is entirely possible that, in the future, individuals who are ahead of their time in different ways and present alternatives to mainstream lifestyles will be prone to falling afoul of the preservers of status quo realities (which I acknowledge are in some ways better than historical alternatives). In fiction, people who learn too much secret information often come to unfortunate ends, as may the people they affect through their lives. Moreover, once information has been learned, it seems virtually impossible to “unlearn” it (amnesia drugs are the first step toward accomplishing that, though early intervention would be crucial), so one is stuck in whatever new personal reality has been realized, perhaps a dystopian one. The question I arrived at was, “How can we protect people’s liberty and state security in the era of mind uploads, when information is radically free and insight is easier than ever to obtain?” I suggest calling it the “intelligence hazard problem.”

In an earlier article, “Uploading for Life Extension Will Be Valid” [4], I argued that as uploaded minds come to process more like software, it should be possible to swap core aspects of the mind, such as episodic and semantic memory and cognitive abilities. I said that continual swapping of such functions would erode the discrete self to the point that individual minds blurred and, by implication, would be hard to differentiate from one another. On further reflection, I think there is a risk in that approach for most people: the TMSI risk (“too much sensitive information”). TMSI is a condition incurred when specific information unusually and undesirably constrains one’s options and imposes a burdensome weight of responsibility on one’s decisions. Some people are attracted to that kind of insight. To them I say, “More power to you, and please use it wisely.”

I suspect that most people would rather avoid such information, and for them avoidance probably is the wisest choice. Whatever they might do, I suggest that people in such a hypothetical circumstance go with their strengths and consider the welfare of others as well as their own good. The danger of a completely uninhibited mind commune, as I see it, is that the blind may lead the blind into a TMSI situation with unfortunate outcomes. With discrete upload minds that selectively share content, the risk and speed of disastrously harmful information leaks would be lessened, and utility would be maximized by preserving the safety and freedom of those not well-suited to bear knowledge of state secrets, or well-positioned to advance their interests if they did possess such knowledge.

The same utility-maximization (and protection) strategy pursued by groups could be pursued by individuals who upload themselves. Uploading will obviously enable routine storage of “back-up copies” that could be instantiated if some sort of accident or virus destroyed the active functional simulation program. However, suppose, for instance, that the upload was investigating something dangerous, like a drug cartel, and the criminals murdered him. If the back-up copy contained the same TMSI as the destroyed version, it would be instantiated to no avail, because it still would be targeted. The same would hold true if an upload diverged into a team of individuals but kept no filters between the clone minds, so that they acted more like one individual than like closely related individuals with common values thriving in their own respective life paths.
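To make the contrast concrete, here is a toy probability sketch. The numbers and functions are purely illustrative assumptions of mine, not claims about real risks: if each clone independently encounters TMSI with some probability, constant unfiltered updates mean one exposure compromises every clone, while discrete boundaries leave the others unexposed.

```python
# Toy model (illustrative only): leak exposure for n mind clones,
# each independently encountering TMSI with probability p.

def p_all_exposed_unfiltered(n, p):
    """P(every clone compromised) under constant unfiltered updates:
    one exposure spreads to all, so it equals P(at least one exposure)."""
    return 1 - (1 - p) ** n

def expected_survivors_filtered(n, p):
    """Expected number of unexposed clones with discrete boundaries,
    where each clone's exposure stays contained to itself."""
    return n * (1 - p)

# With 5 clones and a hypothetical 10% individual risk, unfiltered
# sharing leaves roughly a 41% chance that no clone survives unexposed,
# whereas filtering leaves 4.5 clones unexposed on average.
print(round(p_all_exposed_unfiltered(5, 0.1), 2))    # 0.41
print(round(expected_survivors_filtered(5, 0.1), 2))  # 4.5
```

The point of the sketch is only that unfiltered communication converts many small independent risks into one shared catastrophic risk.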

In the latter approach, which I would endorse, if one individual chose to accept sensitive information as part of a leadership position, or accidentally came upon TMSI that constrained her life options, the others would remain unaffected and continue sharing what they have to offer the world, potentially even living longer in dire circumstances (e.g., entanglements with a drug cartel). I will leave aside the topic of melding a copy of one’s simulated body with a copy of a loved one’s simulated body for an “upload child,” though I suppose that may happen as well. Just based on the availability of processing resources, probably at a monetary expense, the number of mind clones an individual could make would be limited. In relation to the intelligence hazard problem, however, it would seem safer to have a few to at most a dozen separate mind clones who acted responsibly than an unlimited number of roving clones exploring the whole possibility space, who might cause enough trouble to jeopardize the safety of all the related mind clones, since the clones might be considered too similar in nature to be trusted. In speculative prisoner’s dilemma situations, I can imagine that some types of behavior could be considered a sacrifice for the good of others. A component of the psychological pain of imprisonment or loss of life is the knowledge that whatever one had to offer the world will either not be shared freely or will cease to exist.

How much more likely would a person be to sacrifice herself if she knew that people much like herself, who had similar capabilities and would make similar contributions, would remain active in the world? It could take some of the sting out of different forms of self-sacrifice for individuals who found themselves entrapped by TMSI with dangerous groups. This would hold especially true for people who believe they have much to offer the world, such as geniuses and bright people (attributes that require wise formative time investments beyond what nootropics will offer), ethical crusaders, artists and creative types, and people with leadership abilities. It is easier to be stoical about such things when, in a clear sense, all hope is not lost for oneself in relation to the world.
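A back-of-the-envelope utility sketch can make this intuition explicit. The payoff numbers and the `sacrifice_utility` function are hypothetical illustrations of the argument, not a worked-out decision theory:

```python
# Toy utility model (all numbers are made-up illustrations):
# self-sacrifice costs personal welfare, but surviving mind clones
# preserve one's contribution to the world.

def sacrifice_utility(personal_loss, contribution_value, n_clones):
    """Utility of self-sacrifice when n_clones independent copies
    survive to carry on one's contribution."""
    # With no surviving clones, sacrifice forfeits both personal
    # welfare and one's future contribution; with any clones alive,
    # the contribution term is preserved.
    preserved = contribution_value if n_clones > 0 else 0.0
    return -personal_loss + preserved

# A lone individual loses everything; a clone-backed one keeps her
# "contribution" term, making sacrifice strictly less costly.
alone = sacrifice_utility(personal_loss=10, contribution_value=8, n_clones=0)
backed = sacrifice_utility(personal_loss=10, contribution_value=8, n_clones=3)
print(alone, backed)  # -10 -2
```

On this toy accounting, the existence of surviving clones raises the utility of sacrifice without ever making it positive, which matches the "takes some of the sting out" claim above.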

We are entering a time in which the world is more united than ever before, and because of this dense interconnectivity, what happens in one nation usually affects many others. There is a growing consensus on representative democracy as the form of government, yet the international community seems likely to remain semi-anarchic due to a variety of factors, and some major threats now come from non-state actors. The unprecedented super-empowerment of individuals that uploading promises will alter societal dynamics in fundamental ways. We should begin to assess possible dynamics and prepare for them now. My hunch, based on the preceding analysis, is that in most cases it is in the interest of mind uploaders and society for uploads to retain individual identity boundaries with selective sharing, and for responsible uploads to split into a small number of mind clones with discrete boundaries who live independent lives to maximize utility and avoid TMSI traps, though they may act cooperatively for their mutual benefit.

Addendum: After a couple of days of reflecting on this issue, I’ve realized that some threats, like the one I’ve mentioned, are relatively small, whereas other threats, like motor vehicle accidents or their equivalents for an uploaded mind, loom much larger. There are special considerations in the problem I identified that make unlimited simultaneously instantiated mind clones undesirable, as well as a single operational clone or clones with constant unfiltered updates from each other. I know that I’ve been disappointed with some aspects of my life, and considering the possibility of minds like mine on alternative trajectories in a multiverse offers a speculative consolation, but any such minds are absent in this trajectory of the possible multiverse. I propose that the main issue uploading, done as I suggest, could solve be identified as “the multiverse problem”; the “intelligence hazard problem” (which could involve any group) would just be a special case of threat arguing against an unlimited number of uploads bumbling into disaster for their mind clones, as well as against unfiltered continual communication between the mind clones. I sincerely hope this domain of analysis, and arguments similar to mine, will be considered by those who first realize and commercialize mind uploading.

Knowing that minds very much like yours exist in the world, and might even take up your work if you died, might bias people not to defect in hypothetical “prisoner’s dilemma” threat situations, enduring a harder or even shorter life out of a desire to protect themselves in a broader sense. Different people have different priorities, and that probably won’t change, but with a many-worlds approach to accessible reality, all is not lost for oneself in possible situations in which it would be today – which isn’t to say that most people wouldn’t still strive to survive as best they could.


Notes

[1] Johnson, Steven. Where Good Ideas Come From: The Natural History of Innovation. New York: Riverhead Books, 2010. Print. http://www.amazon.com/Where-Good-Ideas-Come-From/dp/1594487715

[2] Rydland, Gina. Future Extreme Media: For a sustainable future! (n.d.). 4 Sept. 2012. Web.  http://www.futureextrememedia.com

[3] Rydland, Gina. The Virtual Center for the Study of Progressing Technologies. (n.d.). 4 Sept. 2012. Web. http://www.irpt.info/vcspt/index.html

[4] Hyink, Ben. Uploading for Life Extension Will Be Valid. 30 Mar. 2010. Web. 4 Sept. 2012. http://ieet.org/index.php/IEET/more/3856


Ben Hyink was a passionate transhumanist activist and an intern with the IEET. He helped organize and lead the Humanity+ Student Network (H+SN), co-wrote the “Humanity+ Student Leadership Guide," and was the recipient of the 2007 JBS Haldane award for outstanding Transhumanist Student of the year.


COMMENTS


I also have been thinking of how people with TMSI could influence others in their cause community to avoid the spread of chaos and community disintegration. To the extent one safely and wisely could do so, one might be able to gently “nudge” others, preferably in ways they wouldn’t recognize, out of dangerous categories and into paths and roles with better outcomes. That said, if you feel confident you have come across a TMSI insight, it generally would be in both your interest and the interest of others to keep the true insight to yourself. If you feel duty-bound to further investigate or report on potential TMSI topics, pursue that path in as responsible and non-chaotic a way as possible. Not only might you be spared crossing over a “point of no return” but those you communicate with might be spared that fate.

Three general principles apply to people with TMSI and those they affect through nudging or outreach:

1. First, protect yourself and your ability to contribute to the cause community if it is possible to do so in a *responsible* way. Different people will have different opinions on what qualifies as a responsible path that they can accept. It probably is in your best interest to find a TMSI community that you like early and stick with it. Learn what you can about your situation, and then do not hesitate in making any important decisions. Excessive TMSI may not help you make a decision and you may have lost what previously were realistic options after passing points of no return. You may not like where you end up.

2. Protect others in your cause community. If it is safe to do so, encourage others to act responsibly (however, realize that you may not know everything the person must face; do not spread rumors). People who don’t know how they are being harmed may not be harmed as much as they would be if they had your TMSI. You cannot save the whole world, but work in the interest of people in your cause community and TMSI community.

3. Last, protect the kind of people likely to be attracted to your cause community and predisposed to support it if you can. This involves indirect efforts to protect the long-term continued existence and growth of the cause community. Again, you cannot save the whole world and some people are just inclined to be a certain way. It *may* be better overall if you do not excessively nudge them (you wouldn’t want to make them suspicious) or share your TMSI.

Desired qualities of pro-social nudging (in general): indirect influence with nothing explicitly stated and nothing clearly traceable to TMSI.

We should encourage all cause community members to invest time in developing their own capabilities to provide higher-level contributions and pursue paid work they will enjoy that may advance community interests. Examples of contributions include donating money, influencing the public and influencing social influencers, advancing relevant technology and science, etc. We can encourage targeting of low-hanging fruit along the way (e.g. peer recruitment of students, who are among the most impressionable of all audiences), but that always should be combined with personal development and growth that can be leveraged to achieve aims. Examples of this would include marketable technical skills, a relevant scientific knowledge base, technology-utilizing artistic skills, conceptual economy business backgrounds (e.g. social, science, and tech start-ups), a degree of philosophical and bioethical familiarity – especially relating to our agendas – and principled and cooperative (learned) leadership skills and know-how. The more capable a person happens to be in such areas the more resilient she is likely to be in a TMSI situation. Significant capabilities probably would offer better real options.

Taking the above advice into account, I think we can protect and grow our cause communities overall for an indefinitely long time and achieve great realistic ends through them. That would be a core value for me, and the reason I wrote this article. You might also take into account that I am somewhat paranoid by nature.

Related theory:
https://en.wikipedia.org/wiki/Stag_hunt
https://en.wikipedia.org/wiki/Best_response#Coordination_games

It is in the rational self-interest of “knowers” in a community to cooperate in its interest, if not to completely unite in all respects. I think my priorities are pretty good, but I would be interested in more sophisticated analyses.
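As a toy illustration of the coordination games cited above, here is a best-response calculation for a two-player stag hunt. The payoff numbers are standard textbook values I am assuming for illustration; none appear in the linked pages:

```python
# Illustrative stag hunt (assumed textbook payoffs): hunting stag
# pays best, but only if the other player also hunts stag.

STAG, HARE = "stag", "hare"

# PAYOFF[my_move][their_move] -> my payoff
PAYOFF = {
    STAG: {STAG: 4, HARE: 0},  # stag succeeds only with cooperation
    HARE: {STAG: 3, HARE: 3},  # hare is a safe payoff regardless
}

def best_response(their_move):
    """Return the move maximizing my payoff against their_move."""
    return max((STAG, HARE), key=lambda m: PAYOFF[m][their_move])

# Both (stag, stag) and (hare, hare) are mutual best responses:
# cooperation among "knowers" is an equilibrium, but so is the
# safe, uncooperative option - which is why trust matters.
print(best_response(STAG), best_response(HARE))  # stag hare
```

The sketch only shows why cooperation can be rational yet fragile: each player's best move depends on expecting cooperation from the other.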





Ben..

This is interesting, but do you not think that “real transparency” is in fact the “real” protection against TMSI?

You paint a picture of a compliant social order with self-protective individuals, clones and sub-groups, communities, which very much mirrors the status quo, but I would say you are missing the real possibilities of an “online global interconnected community” incorporating an oracle of information sharing - here no individual or group would be at risk, as transparency would immediately eliminate risk to individuals?

Although crime and psychopathy, neuroses, may still exist in a future scenario? All “past” secrecies and atrocities uncovered will need to be forgiven, with “forgiveness” promoting great strides towards progressive ethics?





The need for “Self” preservation and of “individual” identity would necessarily require restrictions to the amount of information available/shared from an “online global collective” - How else could an artist, for example, possibly experience or express any creativity when faced with such volumes of knowledge and a flood of ideas from other artists?

Yet the necessary filtering of information does not negate the protections afforded by “real transparency”?

The real protection for an uploaded mind or clone coming into conflict with “drug cartels” would be the knowledge that is instantly shared and uploaded to the wider community? (ps. Drug cartels would no longer exist in this future scenario, as there would be no need for secrecy nor black market dealing anyhow?) Drugs will not be useful where uploaded minds are concerned?





Comment 1:

I think that aside from keeping some things secure, living one’s life in a transparent way might be a good approach to take, especially if you plan on going into politics or leadership.

I think that there probably always will be secrets in the world, especially ones that veil activities that people doing them intend to keep concealed. Massive accumulated infrastructure that is not going away any time soon is intended to hide tech development or espionage activities, and non-state actors are in arms races to avoid detection by authorities.

The kind of world you describe may come to pass someday, but probably not all at once.

Comment 2:

I agree that some degree of isolation is necessary for divergence from a collective mainstream. Then again, some artists get paid big bucks to provide mainstream audiences with exactly the kind of sounds they expect to hear. Unmet consumer desires can be satisfied without relying on unique creativity, not that I think creativity is a bad thing.

You are correct that the benefits of information filtering do not contradict the advantages of transparency. Yet I think at least political authority, and the ability of the state to protect individual liberty, would be undermined by complete transparency. The ultimate aim of a nation state is its continued survival. The ultimate aim of black marketeers is the premium profitability of their scarce commodity, profitability which would not exist in the absence of prohibition or restricted access. In the process of defining its domain of authority – power recognized as legitimate, and therefore enforced – the state will pass laws restricting transparency and access to items and materials deemed harmful. Black marketeers do not want their commodity to lose its scarcity, so by whatever means are available they will pursue their interest in maintaining the illegality of the commodity and their efficacy in supplying it to paying customers, all via means they will endeavor to keep secret almost at all costs, unless the specific threats arising from the transparency of their activities to observers can be negated (e.g., accepted bribes). Both the law and criminals have rational bases of interest in working against transparency, or in operating beneath the radar of superficial transparency. It is an epistemological question whether one actually has hit bedrock reality in a domain of apparently complete transparency – what can we completely know, and how? In a sense, you can only be practically certain when a belief is undermined or “falsified,” and only pragmatically warranted in crediting a well-justified belief that complete transparency has been achieved. Not to sound unduly conspiratorial… there are many possible forms of secrets not fully divulged, e.g., ad hoc approaches to concealing various specific minor things.

I suspect uploaded minds will pay for high quality, even dramatically enhanced, drug trip experiences sans long-term brain damage.





Technological advancement towards mind-uploading and subsequent mind-clones implies also merging of minds and connection to a global collective? In this future scenario, and with the application of transparency, the threat from malicious groups and individuals will be overcome ultimately by the collective itself, through the connection to the collective? And how else can the collective, (including clones), be threatened but by way of connection?

Nation state Governments and their executive admin, secret services, are in the game of convincing us we “need them” for exactly the reasons you describe, and this supports the symbiotic relationship with crime you suggest. But do we really need either?

Uploading can possibly render the need for biological drugs redundant, in preference to induced and shared psychedelia? Although I do not rule out clone real-world experiences as desirable.





Apologies for the dual posts: my mobile permits only limited text..

I agree with your view regarding levels of transparency and validity of knowledge, yet as long as any connected data, (knowledge, intelligence), resides indefinitely, the whole collective need not be party to it at once and together, as long as enough to protect each other are? Which is an analogy to what we experience now through robustness of the web/internet and resurgence of data/knowledge from “individuals”, (blogs, tweets, opinion), via intelligent search engine, (google).

In fact, even without progress towards uploading, the “online global interconnected human collective” can progress towards a reduction in govt executive powers through the realisation of real-time crowd-sourcing and increased democracy? Aiming to eliminate suffering, crime, terrorism and even war? All these things the executive pay “lip-service” to, but ultimately obstruct?

I believe the global collective will lay the foundations for progress?





I hope you guys aren’t confusing uploading your minds onto computers with transferring your minds. I say this because when transhumanists say uploading they usually mean installing their minds into some kind of hardware, in other words transferring. This is misleading, because when you upload something you are not transferring it from one place to another; you are just making a relatively accurate copy. The two should not be confused, or the first people who volunteer to be “uploaded” will not find themselves in a new body but in a close copy of their minds instead.





CygnusX1, I think you make very well-grounded points about the likeliest futures with regard to uploading.  In fact, your analysis has led me to revise some of the basic premises of my model. Thank you for sharing your perspective.

Christian, you are correct that traditionally conceived uploading will just be copying and extending your information theoretic conscious identity into a computer substrate. There also are gradual approaches utilizing brain-AI-computer interfaces. I discussed one such approach in my IEET article cited at the bottom of this IEET article, as well as argued that the apparent material continuity we experience as an operational biological brain is fairly to completely discontinuous anyway.

Again, with regard to surviving in a TMSI state, the question is can you live in a way in which you will be able to live with yourself, and deal with future consequences. I’m not telling anyone how to answer that question of personal life philosophy. Any such scenario is premised on having sensitive knowledge that weighs you down with burdensome responsibility. If you avoid such situations, you may avoid ever having to deal with such burdens. Take care of your life.





Don’t get me wrong, I’m ambivalent about CygnusX1’s vision, but a central premise of my article was mistaken and I think readers should know where uploading inevitably is leading if they decide to go for it. I’m pretty sure people still will be working hard in a collective consciousness - who would carry the weight for a bunch of slackers?

If you choose to upload, you’re probably eventually going to merge with a global superorganism. Know what you are getting into. :)





Back to whatever real TMSI issues there might be along the way to what I’m inclined to think is humanity’s destiny (just based on an intuitive hunch or gestalt impression): the most important consideration probably would be preventing the chaos from TMSI leaks, whether accurate or inaccurate, that causes people to panic and endanger others.

Loose lips probably could sink small movements as well as ships. Panic, and poor judgment leading to it, should be discouraged, with early deflection, containment, or other intervention. People should be strongly encouraged to act reasonably and responsibly.





It might be dangerous for the person being contained to offer any explanatory information, but if they are acting out and it is safe for a person to do so, an attempt or two to calmly reason with the person to change his or her behavior would be ideal.

People who feel outraged may act out to the detriment of themselves and others, and people with TMSI may not have a full appreciation of the other person’s experience, given potentially radically different world models. *Friendly*, early, and importantly non-disclosing intervention may be worth a shot, especially if people care about the person.





From an undisclosed location:

On the chance that some may have come to wrong conclusions about the inspiration for my article, it is based on various and sundry fictional sources. Which ones exactly…I’m not at liberty to say.

However, I think fiction can explore possibility spaces that are not accessible to most people. I think that given certain premises that are not entirely unreasonable to suspect, my analyses are cogent.

All other faults and shortcomings aside, I try to be a rational person. I hope others will consider and maybe extend and develop this line of speculative reasoning.

Cheers!





Along the same line of reasoning though, I suppose one should stop inquiring into a topic if life starts to get strange. Game theory work seems relatively safe.

Above all else, my aim is to help prevent people from getting hurt.





Ego aside, I will admit that I am a very paranoid individual, probably to an irrational extent. Paranoia *may* be helpful if it inclines you to flee from danger.

I suppose that getting too wrapped up in these sorts of speculations might bias toward mental instability. I wouldn’t want to have that effect on anyone either.





The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA 
Email: director @ ieet.org     phone: 860-297-2376