[governance] Could the U.S. shut down the internet?

JFC Morfin jefsey at jefsey.com
Sun Feb 6 16:14:59 EST 2011


At 17:13 05/02/2011, Avri Doria wrote:
>Hi,
>
>I pretty much agree with this analysis.
>
>The only additional thing I would say, and I think the email gets to 
>a similar place by the last paragraph, is that even though the 
>Internet can be easily taken down by governments, there is a 
>resilience in the Internet that by using old techniques and 
>communications technologies, and new technology the communications 
>links can be reestablished in time.
>
>But I think there is a good warning that we should heed in this note 
>and that we should start preparing now for the next regime that 
>decides to take the network down, that we need to support those who 
>are being prosecuted now for their content and we should work on the 
>diversification and distribution of control and governance.

Avri, Louis,

Old techniques certainly offer part of the solution that we need, 
but they do not provide free (in cost and in control) local and 
international coverage, nor do they offer routing solutions in 
asynchronous mode. The typical situation is: no more Internet 
bandwidth, polluted radio and WiMAX, possible but degraded Wi-Fi, and 
costly and possibly tapped or interrupted phone lines.

There are also absolute needs for encryption (a way to promote 
IPsec?) and FEC (Forward Error Correction). This could be 
investigated through current IETF proposals (Fred Baker, Margaret 
Wasserman) for NAT66 and, therefore, through the low-cost 
dissemination of the chips: possibly in a smart plug sold as a 
safeguard, a way to contact your ISP when the connection does not work.
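To make the FEC idea concrete, here is a minimal sketch of the simplest possible scheme, a single XOR parity packet protecting a group of equal-length data packets; real deployments would use stronger codes (Reed-Solomon, fountain codes), and nothing here reflects the actual NAT66 proposals:

```python
# Minimal XOR-parity FEC sketch: one parity packet lets the receiver
# rebuild any single lost packet in the group. Illustration only; not
# tied to IPsec, NAT66, or any specific IETF proposal.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list[bytes]) -> bytes:
    parity = bytes(len(packets[0]))          # all-zero start
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover(received: list, parity: bytes) -> list:
    """Rebuild the single missing packet (marked None) from the parity."""
    missing = received.index(None)
    rebuilt = parity
    for i, p in enumerate(received):
        if i != missing:
            rebuilt = xor_bytes(rebuilt, p)  # XOR out every survivor
    out = list(received)
    out[missing] = rebuilt
    return out

data = [b"pkt0", b"pkt1", b"pkt2"]
parity = make_parity(data)
lossy = [data[0], None, data[2]]             # packet 1 lost in transit
assert recover(lossy, parity) == data
```

The point of FEC on a degraded channel is exactly this: the receiver repairs losses without asking the sender to retransmit, which matters when the return path is slow, tapped, or absent.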

Other problems are the concatenation of people's systems into a 
varying network (I can reach my neighbor or someone in Malaysia, 
etc., as in short-wave networks), addressing (location + ID?), 
transmission control and ACKs, and routing. Naming is OK. Actually, 
after centralized, decentralized, and distributed full-duplex 
networks, we face a quantified, intricate network model. Interesting.
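In such a varying network the only routing that always works is store-and-forward flooding: relay to whoever is reachable right now and drop duplicates. A toy sketch, with node names and message ids invented for illustration:

```python
# Hypothetical sketch of flooding over a varying neighbour-to-neighbour
# network: no routing tables, no stable topology, just relay-and-dedup.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.neighbours = []
        self.seen = set()        # message ids already handled
        self.inbox = []

    def link(self, other: "Node"):
        # Links are symmetric and may appear or vanish at any time.
        self.neighbours.append(other)
        other.neighbours.append(self)

    def receive(self, msg_id: str, payload: str):
        if msg_id in self.seen:
            return               # duplicate: loops are harmless
        self.seen.add(msg_id)
        self.inbox.append(payload)
        for n in self.neighbours:
            n.receive(msg_id, payload)   # relay to current neighbours

a, b, c = Node("a"), Node("b"), Node("c")
a.link(b); b.link(c)             # a reaches c only through b
a.receive("m1", "situation report")
assert c.inbox == ["situation report"]
```

Flooding wastes bandwidth, but on a network whose shape changes minute to minute it is the baseline against which smarter asynchronous routing (delay-tolerant schemes, epidemic routing) must be judged.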

Obviously, in that kind of system we will not obtain end-to-end high 
speed! However, Twitter has shown what can be achieved with 140 
characters. Another issue is certainly the classification of 
information to help people figure out a situation, and to chain 
texts. Coded and semantic compression are also needed to transfer 
language-independent situation reports (an example that we all know 
is: "404") and pictures.
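A "404"-style coded report can be sketched as a short numeric code plus a few fields, packed well under 140 bytes; the code table and field layout below are invented purely for illustration:

```python
# Sketch of a coded, language-independent situation report in the
# spirit of "404". The codes and the code;lat;lon layout are
# hypothetical, not any existing standard.

CODES = {
    100: "network up",
    210: "power outage",
    404: "service unreachable",
}

def encode(code: int, lat: float, lon: float) -> bytes:
    report = f"{code};{lat:.3f};{lon:.3f}"
    data = report.encode("ascii")
    assert len(data) <= 140, "report must fit a 140-byte message"
    return data

def decode(data: bytes) -> tuple:
    code, lat, lon = data.decode("ascii").split(";")
    return CODES[int(code)], float(lat), float(lon)

msg = encode(404, 30.044, 31.236)
assert len(msg) <= 140
assert decode(msg) == ("service unreachable", 30.044, 31.236)
```

Because the wire format carries only the number, the receiving side can expand it into any language, which is the point of language-independent semantic coding on a narrow channel.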

This is where, architecturally, my IETF campaigns can pay off by 
ensuring coherent exploration. The current status of the world 
digital system that is emerging from 40 years of experience, partly 
diluted by merchants as Louis and the IAB say, shows two identified 
and possibly competing sequences that we should try to make 
complementary by any means:

1. The one that we know and that we do not want, what Louis calls a 
regression: the proprietary or monopolistic, and now politically 
enforced, merchants/server/client sequence. We are warned: the 
Rojadirecta.org case is actually a casus belli with Spain. The US is 
testing the international mood while still "negotiating" via ACTA and 
with ICANN.

2. The one that we identified in WG/IDNABis and that the whole 
digital ecosystem community still has to digest, and that will most 
probably be opposed by many, mostly for ego and political reasons, 
because it uncouples servers and users by introducing an intelligent 
use interface at the network fringe, between the inside 
infrastructure and the outside users. Users become protected from the 
real (as we recently learned) direct influence of markets and 
governments on servers and ISPs.

In this sequence, we have (let us imagine a fruit with the kernel 
[internal networks], pulp [IUI], and skin [UI to people and applications]):

1. people and services (rather than servers) peering on the outside - 
they all do exist; this is the people-centric WSIS information society.

2. IUI on the networks' periphery - the Intelligent Use Interface. It 
has to provide a uniform networking experience to 
users/user-applications - the concept has emerged from the IDNA2008 
RFCs, etc.

3. the various central networking systems, whose use the IUI must 
make as transparent as possible, even in degraded situations. These 
systems range from experiments (which do not yet exist) with the 
Internet of the future and the high-speed Internet (which exists but 
may be shut down), down to "old techniques" (which do exist) being 
revived, adapted, and groomed toward a people's.net "emergency 
back-up +", which is the need we are meeting.

One cannot address that kind of major, additional foundation in a few 
days, but one can document the findings on these matters and publish 
an IUCG I-D on the world digital ecosystem architectural principles 
and model as they are now emerging from 40 years of experience, to 
provide basic guidelines. The Internet principles are constant change 
(RFC 1958), simplicity (RFC 3439), and subsidiarity (IDNA2008), and 
RFC 3935 gives some hints by defining additional Internet 
architectural options through the IETF mission. Most probably these 
principles are what we should build upon.

What is interesting is that, due to the imposed constraints, 
experimentation costs should be very low.


jfc




