Maria del Mar Griera
Carlota Rodríguez Ruiz
Sociology of Religion Research Group, ISOR-UAB
Some decades ago it looked as though religion was destined to become a residual practice in 21st century Catalonia. This was not a hasty verdict. Surveys showed that the Catholic Church was losing believers and worshippers at an alarming rate, and it was becoming an increasingly discredited institution in the Catalan context. Whereas in 1980 people who thought of themselves as Catholic made up nearly 80% of the population, by 2015 this figure was closer to 52%. Moreover, more than half of this group said they were non-practising: although they described themselves as Catholic, they hardly ever attended mass or other forms of worship.
However, not all religious faiths are losing followers in our country; quite the opposite. In the last few decades, religious minorities have been gaining ground and nowadays more than 15% of Catalans say they are members of a religious minority, with Islam, Protestantism, Buddhism and Orthodox Christianity being the faiths that attract the largest numbers of people (Baròmetre, 2014). It is estimated that, at present, there are more than 1,360 minority places of worship in Catalonia (ISOR, 2014). Evangelical churches, Sikh gurdwaras, Buddhist monasteries, Orthodox churches and Hindu communities are just some of the religious centres that have been set up in recent years and that have contributed to diversifying the religious map of Catalonia. Nevertheless, despite this remarkable growth, most places of worship remain tucked away in the urban landscape, camouflaged amongst industrial warehouses and commercial premises or in spaces lent temporarily by the administration or by social organisations. In Catalonia, the architectural invisibility of places of worship stands in contrast with the increase in all kinds of religious activities in the streets, including processions, religious festivals, open-air prayers and concerts of religious music that year after year are becoming more visible in the public domain.
In 2015, the ISOR sociology research group embarked on a project to explore the growth of this type of activity in the metropolitan area of Barcelona. The project was entitled “Religious Expressions in Urban Space. Negotiations, tensions and opportunities surrounding the visibility of religious diversity in the Catalan public domain” (“Expressions Religioses a l’Espai Urbà. Negociacions, tensions i oportunitats entorn la visibilitat de la diversitat religiosa a l’espai públic català”). It focused on analysing the (in)visibility of the activities carried out, the bureaucratic and political processes that communities have to go through to hold these activities, and the negotiations that take place with the local community and the audiences they target. The research was designed following a case study methodology and five studies were completed, each one focusing on a single religious faith: Catholicism, Islam, Protestantism, Buddhism and Sikhism.
‘Why do religious communities take to the streets?’
Celebrating and/or commemorating significant dates in a community’s religious calendar is one of the main reasons for organising activities in the public space. An example of this is the Shiite Muslim community in Barcelona, which has been holding the Ashura procession in the Ribera district every year since 2006. The aim is to publicly commemorate the death of Hussein Ibn Ali, grandson of the Prophet Mohammed, and to remember his suffering in unison with Shia communities all over the world. Memories of death also colour the Catholic procession organised by the brotherhood of the ‘Germandat del Gran Poder i l’Esperança Macarena’ every Good Friday, which goes along the Ramblas in Barcelona as well as through the city’s historic quarter. The commemoration serves to collectively remember the origins of the faith and to reactivate emotional bonds with the community of believers. Both in the Ashura and in the Catholic procession, the ritualised staging of pain is a key element that transports the participants emotionally. In the case of the Ashura, the ‘matam’ ritual structures the pace of the procession: rhythmic chants are recited, rising and falling in volume in a loop, while participants beat their chests. In the Catholic procession, the passage of the holy images of Christ and the Virgin of the Macarena is what structures and stages the ritual. The sight of the images unlocks the emotions of the people taking part and triggers a public ovation, as people with outstretched hands literally try to touch the images, amidst cries of “Beautiful, beautiful, you’re the most beautiful! Long live the Virgin of the Macarena!” (field material), demonstrating the complementarity between images, ritual high spirits and emotions.
The reason for taking to the street is not always linked to the expression of pain. For example, every year the Sikh community holds the Guru Nanak festival, in Barcelona and all over the world, to commemorate the birth of the founder of Sikhism. This festivity recalls the joy of the religion’s origins and involves men, women and children from the community, who walk in procession through the streets in the centre of Barcelona. The procession ends with a community meal to which everyone is invited and which is intended to symbolise the hospitality of Sikhism. In mid-April, the Sikh community also holds Baisakhi, the harvest festival, which celebrates the founding of the Khalsa, the institution to which all baptised Sikhs belong. As one of the community members explains: “Baisakhi is the baptism festival, commemorating the creation of the Khalsa. Khalsa means a pure Sikh; when a Sikh decides they want to be Khalsa, we hold this festival. (…) From this moment onwards they have to follow a series of rules, such as not cutting their hair and wearing a wooden comb…”.
Sometimes communities also take to the streets in order to complete a spiritual or religious ritual. The collective baptisms that some Protestant churches hold on Barceloneta beach show this desire to take ritual into the public domain, away from the centre of worship.
All the activities we have described up to this point are largely aimed at the members of religious communities. In contrast, some activities are intended to show the faith of those involved to the heterogeneous audience that gathers in city streets and squares. We are referring, for example, to the so-called “evangelical campaigns” organised by Protestant communities in parks and squares in the hope of attracting new followers, or the handing out of leaflets, brochures and magazines to publicise their faith. However, organisers of this kind of event frequently come up against reluctance from public authorities, who disapprove of the use of public space for what could be regarded as religious canvassing. The boundary between publicising one’s own faith and what is regarded, in a derogatory sense, as intrusive religious crusading is very fragile and often causes controversy. What some see as simply being part of religious freedom and the right to express oneself freely is regarded by others as a proselytising act that should be restricted in our society. The problem is that the line between these two views is often difficult to determine using objective criteria, and it is then that social and cultural biases come into play, tending to prejudice communities that are little known, stigmatised or recently established.
A final way in which religious communities use public space is the protest demonstration. The desire to make their unrest visible is what pushes them into organising an event outside the centre of worship. In a global and interconnected world like ours, these protests are frequently held in response to events happening far beyond our borders. This is the case, for example, of the protest held by Sikhs in October 2015 in the Rambla del Raval to show their unrest at the attacks carried out against their holy book –the ‘Sri Guru Granth Sahib’– in their homeland, the Punjab in India. We could also point to protests held by the Muslim community on issues such as the controversy over the publication of the caricatures of the prophet Mohammed and other similar matters. This kind of event reinforces transnational bonds and the creation of a communal consciousness in the diaspora.
‘The importance of place: social recognition and the public space’
Being able to make oneself seen, to be visible to other citizens, is one of the growing demands of religious communities. They lay claim to their “right to the city”. The spokesperson for the Sikh community told us that for them it is very important to go along the Rambla. They know it is difficult because the area is very busy with traffic but, as they explained, “the community has this wish, to be able to walk along the Ramblas, so that people can see and meet us”. They also point to an unfair situation that allows the Catholic procession to parade along the Ramblas, with permission to cross the city’s most important roads. In a spirit of goodwill, they say that they understand Catholicism has a long tradition in the city, but they also point out that being more recent arrivals should not make them second-class citizens. After much insistence, the Sikhs have managed to obtain permission for their procession to cross the Ramblas, although they have not been authorised to actually take it along this street. For the Shiite Muslim community it is also important to be able to hold their festival in one of the city’s iconic spots, the Arc de Triomf. It is a symbolic issue and part of their desire to be recognised. In this case, the community has expressed its strong disagreement with the proposal that they move the Ashura procession to a closed site, like a pavilion, or to somewhere on the outskirts of the city. As citizens of Barcelona they demand to be able to make their religious and cultural beliefs visible and not have to hide away. There is also another reason for their refusal to move to a peripheral location: they want to be acknowledged on global networks, like Twitter, Facebook and Instagram, as citizens of Barcelona, and they need an iconic landmark in the photos so that the city is easily identifiable to people watching from faraway locations.
They are citizens of Barcelona, but they are also travellers in a globalised world with networks of friends, family and acquaintances across the planet, and they want them all to be part of this gathering.
The separation between sacred and profane space is an issue common to most religions. However, the border between one dimension and the other is often blurred, and expressions emerge that take place on the periphery between the two. Religious activities out on the street are frequently characterised by a hybrid relationship with sacred and profane space: they are expressions of sacredness produced in a space defined as profane. In our country, historically speaking, the majority of religious expressions on the street were Catholic and formed part of the Catholic Church’s public ritual. But times have changed and the religious landscape has been transformed over recent decades. Religious diversity has become very important and minority groups demand their right to take to the streets and make themselves visible to other citizens.
 The project was funded by AGAUR and the Religious Affairs Department at the Generalitat de Catalunya government. The following researchers took part: Avi Astor, Rafael Cazarín, Anna Clot, Miquel Fernandez, María Forteza, Mar Griera, Antonio Montañes, Carlota Rodríguez and Wilson Muñoz.
 In all cases, one or two rituals were chosen out of all those organised in the street by religious communities, and a qualitative methodology was followed consisting of ethnographic observation, collection of audio-visual material and semi-structured interviews with various people involved in the community events.
 Self-flagellation is practised in some other countries but this is not the case in Barcelona.
Joan F. Mira
Institut d’Estudis Catalans
Before we talk about the “concept of culture” or about any attempt to define such an over-used term, we should remind ourselves of a few self-evident truths, like this one: society, the majority of the more or less enlightened population (“cultured” people in all countries, and in each country…) only apply the term culture – in its “elevated” sense, in the dignified, superior sense – to that which is presented and promoted with this attribute by those who have the power to do so; in other words, by the political, institutional, social, “media” or academic power, or whoever else that may be. This is how it is, it’s undeniable, but it needs to be repeated from time to time, because we often forget the simplest facts, especially when they don’t lend themselves to theoretical brilliance. Culture theorists, on the other hand, usually examine their colleagues’ books or papers in great detail, extracting even more theory from them (more contemplation and more ‘spectacle’, which is what ‘theory’ also means in Greek) and they tend to pay little attention to the trivial and very unassuming normal function of people and words. However, we should really be following the advice of Sir Francis Bacon, founding father of the empirical method. He recommended arriving at the knowledge of form or essence by starting with the facts and by means of induction: observing, checking, comparing, and then finally, if possible, reaching some sort of conclusion and definition. It could be, in this case or field we’re working in, that culture doesn’t have an “essence” or a form of its own, but in any case, if it does, it isn’t a substance nor does it have ‘a priori’ any identifiable and definable attributes, it is that which functions socially as “culture” and that receives this name and this recognition. An extremely sad conclusion, empty of content, redundant and perfectly useless, probably because there is no possible definition. 
Not, therefore, any independent and “objective” idea, let’s say, of culture, but instead an often scattered set of facts and realities that circulate and operate more or less effectively. In the same way that “intellectuals” are a normally dispersed set of people who circulate and function as such, who are seen or identified by others, or who identify themselves as such. But if we ask in a survey, “what is an intellectual?”, or ask the person concerned, “are you an intellectual (and why)?”, they may not know how to answer, or the answers may be quite strange… They are hazy voices, with no substance. So expressions like “contemporary culture is…”, “the values of modern culture are…”, “today’s society lives in a culture that…”, “cultural trends in the late 20th century and the early 21st century are…”, and so many others in the same vein, are little more than a ‘flatus vocis’. But there’s a lot of theory and many texts, and very well qualified ones at that, on “culture, etc., etc.”, that without these puffed-up voices would deflate until there was nothing left, just an empty appearance, a coloured balloon.
Let’s remember. Who thought, at the start of the 20th century, that machines, locomotives or factories ‘were’ culture? Nobody. Well, soon afterwards, Marinetti and the Futurists thought it, but in a very particular way: they thought that they were art or the subject of art, the powerful art of the industrial future. In any case, industrial infrastructure a century ago was not “culture”, and now ‘those same’ factories, locomotives and machines are ‘cultural objects’ in museums of industrial archaeology, they are the topic of discussions and conferences, of beautifully illustrated books and major exhibitions. All organised by departments and institutions that administer ‘culture’. The objects are the same ones, but whereas before they functioned simply as industrial or transport objects, they now function as cultural expressions, presented with the added value of history and aesthetics, and therefore worthy of a new form of appreciation and contemplation. This is the issue: they’re presented (by intellectuals, those in the know, experts, specialists, etc.) as worthy of intellectual respect, and that means they’re already “high” culture and their agents are highly respected. Up until the 18th century, musicians were not “high” or “respectable” for example, and until the second half of the 20th century, neither were cooks, dressmakers and hairdressers, etc.; now they are personalities who people listen to, high culture, members of the “intellectual class”. They do the same things but they aren’t perceived or seen in the same light, now they’re part of the “high” level of culture (as well as in terms of money, social presence and contact with “power”), now their words, actions, products and ideas all exercise public influence via the media, etc.; they’re intellectuals! Aren’t they? And why not? Let’s look at idols…
Clearly, for Sir Francis Bacon, Earl of Verulam, the “idols of the tribe”, the “idols of the cave”, the “idols of the marketplace” and the “idols of the theatre”, which he criticises in ‘Novum organum’, were false images and false forms of perception, preventing us from getting to the reality of things as they really are. But precisely in this field of culture things ‘are not what they are’, but what they appear or are represented to be: their cultural ‘reality’ is their presentation, or representation, or image, or appearance. So, their recognised value is as solid or as shifting as shares on the stock exchange or currency converters, it depends on credit, on confidence, on institutional support, maybe on speculation, perhaps on expert opinion (another plague, pest or epidemic – another idol). And this, evidently, doesn’t prevent, but rather enables, monumental frauds occurring from time to time (three quarters, no less, of so-called “contemporary art”, including a considerable proportion of exhibits in the most prestigious galleries of museums in this sphere, are a perfect fraud, I have absolutely no doubt about it, fraud with a multitude of accomplices and beneficiaries; the other quarter is probably a solid and well thought out investment. In terms of examples, anyone interested can find them in practically all the cities in Europe). This also doesn’t prevent what often happens on the stock exchange or in publishing fashions, that, to use physically noisy terms, we can go from ‘boom’ to ‘bust’, or from explosion to fart. We already know that the visible use of idols is to attract, unite and congregate believers around images and representations familiar to the community, and thereby – with worship, veneration, ritual and in short, faith – consolidate the cohesion we normally call “social”. 
This is why the people of Israel had so many problems remaining united over the centuries, because Yahweh insisted that they should be tribes without idols (and if they did not completely disintegrate it was thanks to the Ark, the Law and the Temple, which all played an equivalent role). It’s a role that Imperial Rome was very clear about with the cult of images of the Emperor, or Christian Europe with saints and holy mothers of God. Now we think, what would a contemporary country do with no “cultural” works and names commonly or mainly recognised as valuable role models? To whom would it attribute this ‘worship of culture’ – the worship of universal gods and of the particular gods of each country – that has become necessary in every human society that regards itself as modern and more or less well run? Whether idols are divinity itself or simply a representation of it is largely irrelevant, the same as whether this divinity is “true” or “false”: what counts is the extent of public devotion, the impact and effectiveness of the rituals and the strength of faith.
The most visible result is that, in the same way that (five centuries or two centuries ago, or one, or in many cases and circumstances more than half a century) “the people” of any country we would call western ‘lived’ in an atmosphere “loaded with religion”, surrounded by religion, breathing religion, saturated by religion, we now breathe culture; now we’re saturated and surrounded by culture, we ‘live in’ “culture”, whether we search for it or not and whether we like it or not. I mean that the presence of what we usually call “culture” (whatever its content…) is as abundant, dense, vague, everyday and penetrating – even publicly imposed and, you might say, compulsory – as the presence of what we normally call “religion” was “before”. With its temples, hierarchies and ceremonies, with public and private worship, and with the occupation of the mental and emotional space of both individuals and groups. The idols of culture (and especially the people idolised) are not only obstacles to knowledge, as Francis Bacon would say; they are not only images and representations; they also frequently appear to be divinities themselves – in human form, or living in eternal glory – worthy of the most diverse forms of worship, worthy of idolatry. Who is, then, the brave one – the heretic, the excommunicated – who practises a healthy and moderate form of iconoclasm and dares to say in public that, for example, this building by this famous architect is a pretentious piece of nonsense and out of place, that most of the work by this celebrated and extremely expensive painter is complete rubbish, or that many texts by this great writer are actually meaningless and of no interest? When we think about doing it, we can think of a good many reasons, but we might perhaps lack the courage…
Nancy C. Dorian
Bryn Mawr College, Pennsylvania
In the summer of 2015, a Canadian journalist writing for the Calgary Herald reviewed the very considerable measures that the Canadian government had taken in recent years to support maintenance and revitalization of First Nations languages in that country. There are about 60 aboriginal languages at various degrees of risk in the country, most of them very seriously at risk, and First Nations leaders continue to seek funding for such things as language institutes, aboriginal language programs for students and teachers, immersion schooling, dictionaries, online tutoring, and other supportive measures. Naomi Lakritz, the journalist, points out that government-funded Aboriginal Head Start pre-schooling has been available since 1998 and costs $59 million (Canadian) per year. An initiative called “First Voices” that provides tutors, interactive dictionaries, and online language labs receives part of its funding from the government’s Department of Canadian Heritage. Five years prior to publication of her article, Lakritz reports, Ottawa quadrupled its funding for preservation efforts in British Columbia alone, supporting instructional materials and youth language camps.
Lakritz is by no means hostile to language preservation and revitalization initiatives. “Languages are precious and they deserve to survive”, she writes, “for they represent the unique and irreplaceable way their speakers perceive and think about the world”. But at the close of her article, after recounting the many streams of government support for aboriginal languages in Canada she asks, “How can this not be enough? If languages are dying out and remaining unlearned despite the millions of dollars spent annually on teaching and preserving them, the problem is not a lack of multimillion dollar initiatives. At some point, people have to take advantage of the opportunities offered them. If they won’t, that’s not something more money and more programs can fix.”
This is an understandable position, and Lakritz is not the only one to take it. Journalists in Scotland have raised the same question about government expenditure on behalf of Scottish Gaelic, for example. A large part of the answer – the major part – is that by the time governments such as Canada’s and Scotland’s have become sympathetic to minority language speakers’ hopes for maintaining or revitalizing their languages, it’s very late in the day. The damage done by previous distinctly unsympathetic governments and by what is often centuries of societal and institutional mistreatment has been so extensive that minority-language populations have little left of their linguistic heritage (often a small number of elderly speakers) and in many cases a painfully understandable reluctance to re-acquire a language that was deliberately stamped out of their parents’ and grandparents’ lives. The worst of these stories are by now well known, though no less horrific for that: North American Indian and Australian Aboriginal children removed from their families and sent to boarding schools where they were punished for speaking their own languages and subjected to harsh assimilationist pressures. Even in countries where treatment was less overtly and oppressively cruel, membership in a long-standing minority group such as the Sámi in the Nordic world or the Arvanites (Albanian speakers) in Greece meant social bias and disadvantage that shadow the histories and even the present-day lives of ethnic group members.
Severe biases against minority languages and their speakers often stretch back many generations into the past, sometimes many centuries into the past. The rise of nationalism in the last century and a half has had a tendency to exacerbate the situation for minority-language populations, increasing direct central government influence over outlying regions which in the past enjoyed more independence in spheres that affect language use. More and more exposed to majority-group governance and ideology, members of small language communities can come to perceive adoption of the dominant language as the likeliest route to social acceptance and economic opportunity.
Because of the cumulative effects of long-continued social bias, one can encounter in one and the same heritage group both a deep yearning to strengthen or recover the traditional language and great reluctance to reassociate themselves with a language that brought scorn and disdain to parents and grandparents. Languages have no standing of their own, but instead reflect the standing of the people who speak them. If a particular language is spoken exclusively by the poorest and least esteemed segment of the society, it will itself be poorly esteemed. For this reason languages can go rapidly from highly favored to severely disfavored if the fortunes of their speakers change radically, as happened, for example, with the Incas of Peru and their Quechua language. It was socially supreme before the arrival of Europeans, but reduced after conquest to a stigmatized local language subordinate to Spanish.
If social bias coincides, as it often does, with lesser economic development in an identifiable minority-language region, the combination of stigma and lack of prosperity is likely to undermine the vitality of the language and interrupt transmission of the disfavored minority language in the home and the community. Economic self-interest will then favor acquisition of the majority language, and if the standing of the minority language is low enough, it will also favor abandonment of the minority language. If it’s better not to be identifiable as an Arapaho in Wyoming, or an Arvanite in Greece, or a Quechua speaker in Peru, then one of the simplest forms of dissociation is to abandon the ethnic language.
When the failure of home transmission has become severe enough, hopes for maintaining and revitalizing the language necessarily become a matter of providing educational support for children’s acquisition and provision for the even more extensive support that might produce adult second-language learners. Both of these approaches are unavoidably expensive. For minority-language schooling, such things as classroom space, staffing, and some level of curricular development will be needed, and in many cases also orthographic planning, lexical expansion, archiving mechanisms, and so forth. For adult second-language learning, teaching techniques and materials that are specially targeted to breaking through the deeply established first-language habits must be developed, and then also social environments provided that encourage use of the second language in the learners’ lives. Adult second-language learning is slow compared to children’s acquisition, requiring extensive reinforcement, and it, too, involves considerable cost.
But Lakritz is right to point out that money is not the ultimate barrier to preserving endangered minority languages. The people who belong to the ethnic groups in question have to be themselves the major force for revitalization. They have to want their languages to survive fiercely enough to work through the difficult process of transforming what are often private-sphere languages, used mainly in hearth-and-home settings, into more public-sphere languages, used for example in broadcasting and political life. They have to reorder their social interactions so that they can feel comfortable speaking to contemporaries, children, and non-group members in a language that they previously used almost entirely for small-group solidarity or perhaps only with older relatives. They have to feel strongly enough about the value of reclaiming a heritage language to stand up to critics within their own group who see the effort as futile and fear that it will reawaken painful stereotypes that the group suffered from in the past. This is the truly hard part of maintaining and revitalizing minority languages, and it’s true that it can’t be done by other people or brought into being by official funding, even when it’s generous.
Where this fierce desire is present, however, and heritage-language activism is strong enough to refocus group members’ attention on the heritage language, outside funding can make a real difference, supporting measures to reverse some of the damage done over the long — often very long — period when the language was disdained or suppressed. The damage was done over a long time, and repair will also take a long time. Today there is a rising sense that people are entitled to their own language, that human rights include the right to one’s own group language. Undoing injustices and repairing damage are worthy goals, no less with regard to language than with regard to other facets of life. Certainly not universally, but at least increasingly, rights-oriented governments like Canada’s are lending substantial support to maintenance and revitalization efforts. Revitalization initiatives have proliferated around the world in recent years, as minority-language groups have recognized the precariousness of their linguistic heritages and are trying to improve their languages’ odds of survival. These groups have difficult histories behind them and difficult challenges ahead of them. They will need help, legal and financial, from governments willing to do as the Canadian government has done in moving to counteract the effects of historical wrongs and long-term social disadvantage. Majority-language populations will also need help. They will need journalists who can make clear the long gestation period that led up to the world-wide language endangerment crisis of our time and will make understandable the investment of time and money that is needed to help at-risk language communities recover.
Universitat Autònoma de Barcelona, UAB
The Myth of Monolingualism
Japan’s monolingualism has been deeply rooted in Japanese society for many centuries and it is still preserved today, both explicitly and implicitly. In 1986, Yasuhiro Nakasone, then the Japanese prime minister, publicly stated that “Japan is a monoethnic country and therefore minorities do not exist.” His words were controversial, and organisations defending the Ainu nation (Japan’s indigenous people) protested furiously against the prime minister. Similar arguments have since been repeated by politicians who see Japan as a monocultural, monolingual and monoethnic country, as though there were no linguistic diversity in Japan. This view is underpinned by the firm belief that language is not a political issue in Japan, although it has been increasingly called into question over the last decade.
The three basic concepts: ‘kokugo’, ‘nihongo’ and ‘bokokugo’
We can observe this traditional view of monolingualism in several ways. Firstly, consider the various names used for the Japanese language: ‘kokugo’, ‘nihongo’ and ‘bokokugo’. The term ‘kokugo’, which literally means “the language of the State” (often translated as “national language”), appeared when the modern Japanese nation state was established in the late 19th century and has functioned as a synonym for the Japanese language up to the present day. The second term, ‘nihongo’, is normally used today in a more neutral way, to refer to Japanese as a second language for foreigners, although the concept originated historically in Japan’s colonial policy for Taiwan and Korea during the Second World War. The term was introduced to preserve the symbolic and sacred national character of ‘kokugo’, as applying that idea to the inhabitants of the colonies was felt to be unsuitable.
The term ‘kokugo’ also refers to Japanese as a school subject. The ‘Encyclopedia for Studying the Teaching of Kokugo’ by the National Institute for Japanese Language and Linguistics defines the teaching of ‘kokugo’ on the assumption that all Japanese people speak Japanese as their mother tongue. Here we find another key concept, that of ‘mother tongue’. The two usual translations of this term, ‘bogo’ and ‘bokokugo’, are used interchangeably, without any very clear criterion. The fact that the ‘Iwanami Kokugo Dictionary’ defines ‘bogo’ as a synonym of ‘bokokugo’ confirms the confusion between the two. The term ‘bogo’, which means “mother tongue”, is not at all common in Japan, so it is rarely listed in dictionaries. ‘Bokokugo’, on the other hand, which literally means “the language of the home country”, is much more familiar to the Japanese. This term reflects the assumed unity of state, people and language.
All in all, Japanese enjoys the status of the country’s sole language, as shown by the fact that the concept of ‘official language’ is not used in Japan. Nor is there any legal provision declaring Japanese the official language, except for article 74 of the ‘Saibansho hō’ law on the administration of justice, which states that Japanese must be used in the country’s courts of law.
The ‘kikokushijo’: a reflection of Japan’s ideology
One example that clearly reflects the predominant ideology in Japan, which equates nationality with the Japanese people’s language, is the name ‘kikokushijo’ given to certain children. ‘Kikoku’ means ‘returning to one’s country’ and ‘shijo’ means ‘children’. These are Japanese children who were born and/or lived abroad because of their parents’ jobs and who have returned to Japan. Depending on factors such as the country they were living in, the type of school they attended and their interpersonal relationships, these children had different experiences in each country, and their language skills can also vary widely. Even so, it is assumed that these children are fluent in the foreign language (presumed to be English), and their schoolmates in Japan typically demand that they ‘say something in English’. This phenomenon illustrates that the image of a bilingual person in Japan is often that of someone fluent in English. At the same time, these children are presumed to have a much lower level of Japanese, which leads to them being seen as “Japanese strangers”, “half Japanese” and even ‘gaijin’ (“foreigners”). A native command of Japanese thus serves as a criterion for “being Japanese”.
These children have had a major impact: until a few decades ago, living abroad was not at all common in Japan, and the return of Japanese children with the experience of having lived in a foreign country was regarded as a threat. Instead of accepting heterogeneity, Japanese society pressures ‘kikokushijo’ to re-adapt and give up everything they have acquired abroad, including their foreign language skills, as this is seen as incompatible with Japanese society. This ungenerous attitude towards heterogeneity stems from ‘Nihonjinron’ (Japanese identity theory), an ideology that values Japan’s homogeneity and distinctiveness. A Japanese proverb says, “Deru kui wa utareru” (“The stake that sticks up gets hammered down”): it is difficult to stand out by behaving “differently”, as this breaks harmony and uniformity. As a result, ‘kikokushijo’ often prefer to blend into Japanese society rather than keep up their fluency in other languages, since being bilingual or multilingual clashes with Japanese society’s preference for uniformity. Even so, in the 1980s, when the Japanese government began using the concept of ‘internationalisation’ more frequently, the negative view of ‘kikokushijo’ ceased to be predominant, and these children went from being a discriminated minority to becoming a symbol of internationalisation and even objects of admiration.
‘Hyōjungo’, ‘kyōtsūgo’ and ‘hōgen’
Japanese society’s huge respect for uniformity can also be seen in the Japanese language itself. In the 19th century, with the fall of the Tokugawa shogunate, Japan embarked on a period of modernisation under a new government. To form a nation state strong enough to compete with other countries, the people had to be united, and unifying the language was one measure taken to that end. Until then, Japan’s linguistic diversity had been huge: the country was divided into 256 ‘han’ (domains governed by a feudal lord who paid taxes to the central government), with very little contact between them and with variants so different that people from different ‘han’ could not understand each other. Differences between social classes added further to this linguistic diversity.
To achieve ‘a uniform language’, the variant spoken in uptown Tokyo (the Yamanote area) was chosen as the standard (‘hyōjungo’). The standard language is also called the ‘common language’ (‘kyōtsūgo’), a term that came into use in 1951 through the regulations governing school teaching, designating, in contrast to the regional dialects, the variety that could be understood all over Japan.
The other regional variants were given the status of dialect (‘hōgen’) and began to be regarded as “bad” habits to be corrected or excluded. In Okinawa, the islands in the south of Japan, for example, schools even adopted a punishment system known as ‘hōgen-fuda’ (dialect label). Regional variants were seen as mere ‘accents’ of standard Japanese and were not usually written down. This treatment of regional variants can be seen in the guidelines for teaching ‘kokugo’ up to the late 1950s, which were intended to correct accents and promote the standard language.
Teaching ‘kokugo’ thus set up a hierarchical relationship, with the variant spoken in Tokyo at the top and all the other variants below it. As a result, people who could only speak their own dialect came to feel inferior. Moreover, the speakers of each dialect were given a stereotyped, often negative, image: speakers of the northern Japanese variant were viewed as simple and rustic; speakers of the Osaka variant as funny, tight-fisted and vulgar; speakers of the southern Japanese variant as masculine and abrupt; and so on.

The situation gradually changed, however: not only did speakers of regional dialects master the standard language thanks to the mass media, but social attitudes towards regional variants became more positive. Lately, these variants have even become ‘trendy’ with the younger generation. The media often discuss the question of dialects, dialect conversation guides are on sale, and young celebrities who would never have dreamt of speaking their dialect to the media twenty years ago now do so. Indeed, some words and expressions from the various dialects (especially those of Osaka, because of the success of the comedy culture there) are used by the younger generation to communicate with each other, even when they are not speakers of these variants. Dialects have thus become a kind of entertainment in which people choose a particular one according to the image assigned to it and ‘virtually transform themselves’ by speaking it. It should be remembered, though, that the words and expressions young people use are not necessarily those of real everyday speech; they are frequently ‘virtual dialects’, in other words, dialects as associated with their images.
Yukari Tanaka, a Japanese dialectologist, has called this phenomenon ‘hōgen cosupure’ (dialect costume play), meaning that these young people dress up in a dialect rather than in clothing. Those who have no dialect of their own (especially people from Tokyo and the surrounding areas, where the difference from the standard is barely noticeable) envy ‘native’ dialect speakers and often become ‘false speakers’ of the dialect they like. As a consequence, regional variants have shed their old inferior status and acquired a kind of added prestige.
As we have seen, Japan’s linguistic diversity went unacknowledged because of the country’s huge respect for uniformity and its ungenerous attitude towards heterogeneity. It is quite ironic that both ‘kikokushijo’ and speakers of regional variants have gone from being objects of disdain to objects of admiration.