The male gaze at Clarissa results in her ego split and her conflicting inner and public selves. While she enjoys the pleasure of being looked at as one of the most charming, kind, and loveliest ladies, she knows there is a price to pay for it. She is conscious that it is utterly silly and idiotic for her to do things not for the sake of the things themselves, or for being herself, but "to make people think this or think that" (Woolf 2003). As Peter comments on Clarissa in her middle age, "she was worldly; cared too much for rank and society and getting on in the world" (Woolf 2003). It is for getting on in the world that she has to keep up with the rest of the people in her class, to perform against her will under the male gaze. Superficially, she is brightly happy, loving life passionately; deep inside, however, she wants so much to escape from this desperate life of being Mrs. Richard Dalloway that for one moment she nearly thinks of asking Peter to take her away (Woolf 2003).

Through the characterization of Mrs. Dalloway and the depiction of her inner world, Woolf exposes the passivity of women's lives in a patriarchal England. With men dominating money, power, and the public voice, their influence and desires have been unmistakably internalized by women like Clarissa. The male gaze gives orders, dictates her behavior and mind, and teaches her to see herself through a man's eyes.
As can be seen, H. Kelsen's "Pure Theory of Law" is the most capable of theoretically substantiating the imposition of certain rights on robots, which would legally assume the status of an electronic / mechanical person. It should be taken into account that the term "electronic person" has already been adopted by both international institutions and most national ones. Based on the approach of the "pure theory of law", it is concluded that an electronic person can be interpreted as a personified unity of legal norms that oblige and authorize artificial intelligence meeting the criterion of "rationality".

The study of the problems of the legal capacity of electronic persons confirms the need to form a fundamentally new toolkit for legal regulation, one associated with the specifics of electronic persons, characterized primarily by the difficulty of localizing their legally significant behavior.
Created by human activity to facilitate that activity, a robot or electronic / mechanical entity performs certain functions specified by its developers. Like any functioning mechanism, this entity can fulfill or violate the duties assigned to it. The natural question arises: who will be responsible for possible errors in the functioning of such a robot? The choice of answers is not large: it must be either the electronic entity itself or the developer of the artificial intelligence.

This problem was actualized by the increase in the autonomy of artificial intelligence, as well as by the growing number of deaths resulting from "decision-making" by such intelligence. One example is a traffic accident in the United States: as a result of an incorrect assessment of the situation by a Tesla self-driving vehicle, the car collided with a truck, and the driver, who did not have time to take control, died.

In this regard, let us recall the words of the founder of Tesla, E. Musk, who said that neither road accidents, nor plane crashes, nor a lack of drugs or poor-quality food can compare in level of danger with the development of artificial intelligence, and who called for the introduction of state control over the implementation of the corresponding technologies.

The requirements for the design, development, and production of this class of industrial robots state, in particular, that during the development (design) of a machine and (or) piece of equipment, possible types of danger must be identified at all stages of the life cycle. At present in Russia, for example, responsibility for the illegal consequences of the functioning of industrial robots is borne by their owners, manufacturers, or operators.
One of the first countries to apply legal regulation to the field of artificial intelligence, as well as to its carriers (robots or electronic / mechanical entities), was South Korea, where the Law on Smart Robots was adopted in 2008 (42); in it, such robots are legally identified as mechanical devices that perceive the environment, recognize the circumstances in which they function, and are endowed with the ability to move independently. The document, however, addresses only the development of robotics, focusing on development issues, including measures of state support, and does not cover the entire range of problems (discourse) set out above.

The British, in contrast to the Koreans, began to discuss not only the practical benefits of robots but also the ethical issues associated with the use of artificial intelligence. As reported by "Vesti" (Ukraine), in the spring of 2016 the British Standards Institution (BSI) published a "Guide to the ethical design and application of robots".
However, the traditional argumentation remains in force, according to which a machine will remain a machine: it will never surpass a human and will never receive (should not receive) any rights, still less the rights inherent in a human. The position of its proponents is unchanged: they are ready to regard robots, in the legal sense, as nothing more than slaves, but certainly not as "partners" of people. The strength of this argumentation is reinforced by its supporters' fear that respect for the fundamental rights of robots would, following the logic of development, doom humanity to extinction.

The granting of rights to robots by assigning them the status of a legal entity, an "electronic entity", also raises the problem of differentiating those rights among the robots themselves. As L. Wein again notes in his article, along with the granting of rights, different robots would receive different volumes of those rights, forming a hierarchy of legal statuses of robots.