Cambridge Digital Humanities

Theme: CDH Methods Fellow Workshop Series


27 matching courses


Applications for this workshop have now closed.

As religious services and communities have shifted online so too have scholars of religion. But at what cost? These sessions raise some of the epistemological and ethical issues of doing fieldwork in a digital environment from an inclusive anthropological perspective with a close-up on a particular case study in each session.

The first session considers conducting virtual ethnography, what is gained and what is lost, with a focus on ethnography with Orthodox Jewish populations. The second session assesses digital surveys of religious communities and their attitudes, e.g. what the 'bean-counters' might miss (and strategies to avoid missing it). Finally, in the third session we problematize the ethical tensions in online studies of community media, with a particular focus on French Muslim media, already heavily surveilled.

The sessions are intended to develop researcher knowledge and explore cross-cutting issues that concern a broad spectrum of humanities and social science scholarship, serving as:

  • a forum for the critical discussion of digital methods and epistemologies,
  • a place to learn more about specific case studies particularly in the UK and France, and
  • an assembly of early-career researchers in the throes of a related or relevant project who wish to share and learn from one another

Applications for this workshop have now closed.

The corpus linguistic approach to language is based on collections of electronic texts. It uses software to search for and quantify linguistic phenomena that make up patterns, which it then compares within and across texts on the basis of frequency. Corpus stylistics applies the tools and methods of corpus linguistics to stylistic research, focusing mainly on literary texts, whether individual works or corpora. Corpora here are usually principled collections of texts, for example a collection of texts by one author, or texts from a specific period. Corpus stylistics attends both to general patterns and meanings observable across corpora and to patterns and meanings in an individual text. In its quantitative approach it is in many ways similar to work referred to as ‘distant reading’ and ‘cultural analytics’. These approaches emphasise what we gain from looking at texts from a “distance”, i.e., in large quantities. For corpus stylistics, the relationship between the quantitative and the qualitative is central. Therefore, research in corpus stylistics often deals with much smaller, “cleaner” data sets, so that the qualitative step in the analysis is more manageable.
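The frequency comparison at the heart of this approach can be sketched in a few lines of Python (a toy illustration, not the course materials): normalising counts to a rate per 1,000 tokens makes patterns comparable across texts of different lengths.

```python
from collections import Counter

def relative_freqs(text):
    """Tokenise naively and return each word's frequency per 1,000 tokens."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    return {w: n / len(tokens) * 1000 for w, n in counts.items()}

# Two toy 'texts' standing in for corpora
text_a = "the sea was calm the sea was grey"
text_b = "the moor was wild and the wind was loud"

freqs_a = relative_freqs(text_a)
freqs_b = relative_freqs(text_b)

# A word's relative frequency can now be compared across texts
print(freqs_a["sea"])            # 250.0 per 1,000 tokens
print(freqs_b.get("sea", 0.0))   # 0.0
```

Real corpus tools add proper tokenisation, keyword statistics, and concordancing on top of exactly this kind of normalised count.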

This workshop aims to introduce the basic corpus linguistic techniques and methods for working with literary and other texts. It aims:

  • To provide an introduction to corpus linguistics in relation to digital humanities approaches;
  • To develop critical understanding of how the representativeness of data used in quantitative research may influence results;
  • To critically examine the relationship between quantitative and qualitative textual analyses;
  • To provide a practical toolkit for computational textual analysis.

The aim of this course is to support students, researchers, and professionals interested in exploring the changing nature of English vocabulary in historical texts at scale, and to reflect critically on the limitations of these computational analyses. We will focus on computational methods for representing word meaning, and word meaning change, in large-scale historical text corpora. The corpus used will consist of Darwin’s letters from the Darwin Project (https://www.darwinproject.ac.uk/) at Cambridge University Library. All code will be in online Python notebooks.
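As a rough, hypothetical sketch of the kind of method involved (the actual notebooks may differ), word meaning can be represented by co-occurrence counts over a context window, and relatedness between words measured with cosine similarity:

```python
import math
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Build count-based context vectors: word -> {context word: count}."""
    vectors = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        tokens = sent.lower().split()
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    vectors[w][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# A tiny made-up 'corpus' from one time period
period_1 = ["the finch has a stout beak", "the finch sang in the tree"]
vecs = cooccurrence_vectors(period_1)
similarity = cosine(vecs["finch"], vecs["beak"])
```

Meaning change is then studied by building such vectors separately for different time periods and comparing how a word's neighbours shift.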

If you are interested in attending this course, please fill in the application form.

Methods Fellow Workshop: Audible knowledge: soundscapes, podcasts and digital audio scholarship

Dr Peter McMurray (CDH Methods Fellow)

With the rise of web-based scholarship and affordable digital audio equipment, artists and researchers are increasingly turning to audio formats as a way to share their work with a larger audience and to cultivate new forms of knowledge rooted in listening. This workshop will offer an introduction to digital audio recording and editing (using Reaper, a digital audio workstation which can be downloaded and used for free on an extended trial basis). We will focus particularly on editing choices for soundscape composition and podcasting, and participants will have the opportunity to produce a short audio piece over the course of the workshop.

Itamar Shatz - Methods Fellow CDH

This course will introduce participants to key concepts in statistical analyses, including statistical significance, effect sizes, and linear models. The goal is to give participants the basic tools that they need in order to understand the use of statistical methods by others and to use these methods effectively in their own research. We will focus on an intuitive and practical understanding of statistical analyses, rather than on the mathematical details underlying them. As such, the course will be accessible for those without a quantitative background, although it will help to have knowledge of basic descriptive statistics (e.g., mean and standard deviation).

The course will cover (approximately) the following topics:

  • Session 1: statistical significance and statistical tests (including hypothesis testing, p-values, statistical power, t-test, and chi-square test).
  • Session 2: effect sizes, correlation, confidence intervals, and outliers.
  • Session 3: linear regression (including simple/multiple regression, residuals, beta coefficients, and R-Squared).
  • Session 4: linear regression continued (including test statistics, standard errors, centering, interaction, categorical predictors, linear models, and assumption testing).
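For a flavour of Session 1's material, the t statistic for two independent samples can be computed directly from means and variances; this toy Python sketch (not course material) uses Welch's formula, which does not assume equal variances:

```python
import math
import statistics as st

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = st.mean(sample_a), st.mean(sample_b)
    var_a, var_b = st.variance(sample_a), st.variance(sample_b)
    # Standard error of the difference between the two means
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Two small made-up groups of measurements
group_a = [5.1, 5.5, 4.9, 5.3, 5.2]
group_b = [4.2, 4.6, 4.4, 4.1, 4.5]
t = welch_t(group_a, group_b)
```

A large |t| relative to the appropriate t distribution yields a small p-value; statistical software adds the degrees-of-freedom calculation and the p-value lookup on top of this arithmetic.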

Isabelle Higgins, Methods Fellow - Cambridge Digital Humanities

This Methods Fellows' Workshop Series event aims to encourage participants to think critically and reflexively about the nature of digital humanities research. It will explore (both individually and collectively) the function and effect of critical, intersectional and decolonial research methods and their impact on research fields, participants and research outputs.

For each seminar, participants will be provided with a reading list that will contain both core introductory texts and additional readings. They will be expected to do 30 minutes of reading ahead of each seminar. The seminars themselves will be a mix of presentations, small group discussion and the study of specific empirical cases.

Throughout the seminars we will collectively assemble a shared bibliography of academic texts and other digital resources. Participants will also be encouraged to bring and share examples and challenges from their own research.

To increase space for discussion and critical reflection, participants will be encouraged to form small working groups, focused on the seminar theme they find most productive, and to connect with their working group for a 30-minute call to reflect on their chosen seminar outside of the scheduled four hours of teaching. There will be the option to feed back on these discussions to the wider group, deepening our shared understanding of the content covered in the course. Isabelle will also hold virtual office hours following the seminar series. In these ways and others, the series will aim to cater for those new to this area of research, as well as for scholars who are already working in digital humanities.

Key topics covered in the sessions will include:

  • Seminar 1: Digital Humanities in Social and Historical Context: Considering what and how we research

We will focus on placing digital humanities, as a discipline, in the context of its emergence. Disciplinary Sociology, for example, is increasingly grappling with its colonial past (Meghji, 2020). What happens when we examine the history and context of digital humanities? McIlwain (2020) reminds us of the historical ties between the development of computational technology and the surveillance of Black bodies. Yet digital humanities research has also sought to challenge the legal, social and political power exercised through digital systems (Selwyn, 2019). Does contextualising our methods change how we approach them?

  • Seminar 2: Critical approaches to Digital Environments: Affordances, Interfaces, AI, Algorithms

We will draw on the vast range of work produced by race critical code scholars, which help us to explore the assumptions and inequalities that are coded into the software we study (or use to conduct our studies). Ruha Benjamin (2016a:150) reminds us to ask of digital technology: 'who and what is fixed in place – classified, corralled, and/or coerced, to enable innovation?' How does a consideration of encoded digital inequalities affect our methodologies?

  • Seminar 3: Critical Engagement with User Generated Content: Beyond content & discourse analysis

We will draw on critical theories that draw attention to the digital and social constructs and conventions that shape the production of user-generated content, with Brock's (2018) Critical Techno-Cultural Discourse Analysis as one such methodological contribution. We'll explore what happens to our research when we broaden our methodological framing, considering the type of content produced by users and how it is produced, who is producing it, and what governs this production.

  • Seminar 4: Looking forward: Our roles as researchers in Digital Humanities

We will pay attention to the growing calls from a range of cross-disciplinary scholars who invite us to actively consider the impact of our methods on the future. We'll explore different notions of methodological responsibility and innovation, from the speculative (Benjamin, 2016b), to the caring (de la Bellacasa, 2011), to the adaptive and inductive (Markham & Buchanan, 2012). What happens when we place our research into its broader context and consider how our methods will shape the future of our discipline?

Methods Fellows Series | Social Network Analysis new Tue 8 Mar 2022   14:00 Finished

Thomas Cowhitt, Methods Fellow - Cambridge Digital Humanities

This Methods Fellow's Workshop Series event will introduce users to social network analysis in R. Participants will be asked to generate their own relational dataset. We will then use several R packages to visualize and interpret relational data. By the conclusion of this course, users will be able to construct a relational dataset, load and clean this dataset in R, and generate static network diagrams and reports on descriptive network statistics.
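The workshop works in R, but the underlying ideas are language-independent; as an illustration only, degree centrality (one of the descriptive network statistics that packages report) can be computed from an edge list in a few lines of Python:

```python
from collections import defaultdict

# A toy edge list: who corresponds with whom
edges = [("ada", "ben"), ("ada", "cas"), ("ben", "cas"), ("cas", "dee")]

# Build an undirected adjacency structure
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree centrality: a node's ties divided by the maximum possible ties
n = len(adjacency)
degree_centrality = {node: len(nbrs) / (n - 1) for node, nbrs in adjacency.items()}
```

R packages such as igraph compute the same quantity (and many richer ones) directly from a relational dataset loaded as an edge list.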

This course looks at how modern computational techniques in logic can be used to approach questions in the history of logic, while also reflecting on the differences and similarities between historical and modern approaches to logic.

Historically, the course will focus on two authors’ approaches to modal logic, the branch of logic that deals with possibility, necessity, and contingency: Ibn Sina (10th–11th century) and John Buridan (14th century). Using these two authors and their discussions of logic as a starting place, we will look at how their logical systems can be represented and formalised using contemporary computational methods, and reflect on the similarities and differences between historical approaches to analysing validity and modern notions of algorithms.

The overarching aim of the course is to develop the framework that allows us to computationally show that Buridan and Ibn Sina are working with the same modal logic under two different presentations.

This course demystifies principles of data visualisation and practices of graph creation in Python, helping trainees understand and reflect on how good data visualisation under the “5 Principles” can be achieved, and developing Python’s application in data visualisation beyond analysis. It is aimed at students and staff who are interested in and/or use data visualisation in research or outreach and who hope to explore data visualisation in Python; basic Python knowledge is assumed. It is delivered as a 4-hour workshop (on Zoom), plus around 2 hours of self-paced preparation and post-class exercises, plus 1 hour of asynchronous Q&A, combining theory, case learning, peer interaction and practical work: we first present an introduction to key concepts of and problems in data visualisation, followed by case studies and group discussion of data visualisation principles and how to visualise data better in practice; then, through a demonstration, we employ Python to visualise data and go through the main types of graphs.
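As a toy illustration of one widely cited principle (this is not the course material, which uses Python plotting libraries), bars whose ink is proportional to the values they encode can be sketched even in plain text:

```python
def bar_chart(data, width=40):
    """Render a horizontal bar chart as text.

    Bar lengths are scaled from zero, so the amount of 'ink' is
    proportional to the value it encodes.
    """
    top = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / top * width)
        lines.append(f"{label:<10}{bar} {value}")
    return "\n".join(lines)

# Made-up counts for three text genres
counts = {"poetry": 12, "drama": 30, "fiction": 18}
chart = bar_chart(counts)
print(chart)
```

Truncating the axis (scaling bars from a nonzero baseline) is the classic way this principle gets violated, exaggerating small differences.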

This course will be of interest to academics at all levels (including PhD students) who travel to remote locations (including small libraries worldwide) to access their primary material (often pamphlets and hand-written ephemera) which they are interested in digitising not only for their own scholarly appraisal, but also as a means of enabling access to the wider academic community. We will go step-by-step through preparation of materials, cataloguing systems, rigs and illumination, tethered photography using Lightroom, smartphone lenses and Halide, and packaging and checksums. We will also be discussing theoretical and ethical questions around decolonisation, reparation, and handling of Black and Indigenous heritage.

Methods Fellows Series | Visualising Data Clearly new Wed 4 May 2022   14:00 Finished

If you've ever collected some data but weren't sure how to go about visualising it in a way that could help you uncover new insights, or if you've struggled to present data in a way that helped others understand your findings, this course is intended for you.

We'll talk about how to select the right visualisation for your data, discuss the pros and cons of different approaches, and get hands-on experience displaying information in clear and compelling ways. We'll also discuss broader issues surrounding visualisation science, such as common ways that visualisations are misinterpreted and how to avoid them, and controversies around what counts as best practice in visual communication.

In addition to the weekly online sessions, participants are expected to spend around two hours per week applying the skills learnt to gain greater fluency and enable us to 'workshop' each other's visualisations.

You will also benefit if you have the chance to take our "Give me 5! Principles of Data Visualisation" workshop, which is scheduled for 23rd & 30th March. However, attending this workshop is not a prerequisite, so please do not be deterred if you miss the dates.

Do you need a database for your data, or could you store the data in standalone files? Which database paradigm should you consider? What are the consequences of these choices for your work routine? And how do you navigate all of this with minimal or no programming experience?

These and more are the questions we will address in the course. We aim to provide a gentle introduction to databases and database paradigms, with examples that help explain the differences between the most common database packages and guide researchers to design suitable solutions for their data problems.
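To make the trade-off concrete, here is a minimal, hypothetical sketch using Python's standard-library sqlite3 module (table and data invented for illustration): a relational database replaces ad-hoc file parsing with declarative queries.

```python
import sqlite3

# An in-memory relational database; the same code works with a file on disk
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE letters (sender TEXT, year INTEGER)")
con.executemany(
    "INSERT INTO letters VALUES (?, ?)",
    [("Darwin", 1859), ("Hooker", 1860), ("Darwin", 1862)],
)

# A declarative query replaces looping over and parsing standalone files
rows = con.execute(
    "SELECT sender, COUNT(*) FROM letters GROUP BY sender ORDER BY sender"
).fetchall()
# rows == [("Darwin", 2), ("Hooker", 1)]
```

For small, flat datasets a spreadsheet or CSV file may be entirely adequate; the database starts to pay off when data is relational, shared, or too large to scan by hand.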

These workshops will invite participants to rethink the graphic design of a musical score, working with a novel set of principles to modify the spacing, layout, and position of its notes and signs for intelligibility and/or artistic purposes.

In previous experimental research, Arild has found that musical scores with modified engraving, spacing, and layout rules can —at least in certain practices and for certain repertoires— elicit more fluent and precise readings than conventional scores. The abstraction of informational units and of discourse structure from a score seems to be enhanced by his approach of separating and redistributing notation symbols and other visual materials using a digital (quantifiable, taxonomic) hierarchy of divisions comparable to what is nowadays conventionally applied in (Western) language texts. This seems to be facilitating the decoding and apprehension of information, affecting the conversion of notation into performance; it is also being investigated at present in terms of academic and artistic impact.

Participants will be able to use the flexibility and manageability of digital production to introduce a radically new conception of the visual structuring of a musical score: Arild proposes to go beyond the mere reproduction of analogical models with digital tools; for that, participants will experiment with novel flexible spacing, layout and visual structuring cues that could enhance, in music reading, the integrative and abstractive processes that fluent readers already use in language (we do not read sequentially letter by letter; good readers group, prioritise and predict the symbols presented to them). This approach is intrinsically digital, as it is based on being able to use the symbols of a score in a modular, movable, and experimental manner —and in this context 'experimental' naturally includes heuristic or intuitive manipulations by the score users. Arild's view is that a novel conception of music notation should include the possibility of re-organising the materials, allowing the user at either end (creator or reader) to group, separate, highlight and grade visually the symbols present in a score.

This project begins from the premise that ‘transparency’ is not clear at all. Transparency is historically mediated, culturally constructed, and ideologically complex. Understood expansively, transparency is enmeshed with a variety of functions and associations, having been mobilised as a political call to action; a design methodology; a radical practice of digital disruption; an ideological tool of surveillance; a corporate strategy of diversion; an aesthetics of obfuscation; a cultural paradigm; a programming protocol; a celebration of Enlightenment rationality; a tactic for spatialising data; an antidote to computational black boxing; an ethical cliché; and more.

Across two workshops, we will explore the multidimensionality and intractability of transparency and investigate how the demand for more of it—in our algorithms, computational systems, and culture more broadly—can encode assumptions about the liberational capacity of restoring representation to the invisible. As a group we will conduct a survey of transparency and its political ramifications to digital culture by learning about its conceptual genealogies; interrogating its relevance to art and architecture; questioning its limits as an ethical imperative; and mapping it as a contemporary strategy of anti/mediation. Drawing on a combination of artworks, historical texts, cultural touchstones, and moving images, these workshops will give participants an opportunity to attend to transparency’s complex configurations within contemporary culture through a media theoretical lens. This project is designed to facilitate collaborative study; foster inter-disciplinary discourse; promote experimental learning; and develop a more theoretically nuanced and historically grounded starting point for critiquing transparency and its operations within digital culture.

CDH Methods | Introduction to R Studio and R Markdown new Mon 21 Nov 2022   13:00 Finished

Convenor: Giulia Grisot (CDH Methods Fellow and a Visiting Academic)

This Methods Workshop will deliver an introduction to R Studio and R Markdown; the workshop will run through the functionalities and advantages of using R Studio and related tools for organising and analysing data, as well as for writing and referencing.

About the convenor: Giulia has a mixed background in Literary Linguistics, Psycholinguistics and Digital Humanities and has gained experience in both qualitative and quantitative approaches to texts and language in general, becoming familiar with several coding languages (R, python) essential for statistical as well as corpus investigations.

Giulia is currently working with large corpora of Swiss German fictional texts, looking at sentiments in relation to represented spatial locations, using both lexicon-based methods and machine learning.

Convenor: Tom Kissock (CDH Methods Fellow)

This Methods Workshop will offer Video Data Analysis for social science and humanities students. It is a relatively new, broad, and innovative multi-disciplinary methodology that helps students understand how video fits into modern research both inside and outside academia. For example, Cisco has estimated that video will make up 80% of internet traffic, with 17.1% of that being live video (a 15-fold increase since 2017); it is therefore a tool that cannot be overlooked when conducting research.

Tom will address how to use video ethically, for example:

  • Informed consent
  • Storage
  • Privacy

and also practically:

  • Building timelines
  • Coding schemes
  • Presenting research findings

Tom also plans to include a lesson focussed on viewing livestreams in a reflexive manner, as this is a huge topic in the TikTok era.

About the convenor: Tom has fifteen years’ experience as a Director, Executive Producer, and Livestream expert for the BBC, YouTube, NBC, and Cisco; coupled with seven years’ experience researching video witnessing and human rights abuses. In 2020 he received his MSc in Globalization and Latin American Development from UCL where his research used Video Data Analysis as a research methodology. He tracked how populist politicians in Brazil built misinformation campaigns by strategically cross-sharing videos to avoid journalistic questioning as a symbolic accountability mechanism during the 2018 presidential elections.

His PhD in Sociology at the University of Cambridge is a loose extension of his MSc, but explores positive aspects of streaming advocacy, such as how Indigenous video activists in Brazil use live video on platforms like Instagram, TikTok, and Kwai to reach audiences to discuss climate change, the environment, and land rights. He is interested in how video can produce knowledge and, subsequently, how societies value different knowledge through the process of video witnessing. In his spare time, he serves as the Executive Producer of Declarations: Human Rights Podcast (part of Cambridge’s Centre for Governance and Human Rights), has given lectures on live streaming and human rights at MIT, UCL, and the University of Essex, and has written pieces for LatAM Dialogue and the Latin American Bureau.

Convenors: Leah Brainerd & Alex Gushurst-Moore (CDH Methods Fellows)

Centuries of ceramics. Millennia of maquettes. How do we grapple with large datasets? Join archaeologist Leah Brainerd and art historian Alex Gushurst-Moore to increase your computational literacy, learn how to scrape data from collections databases, and interpret that data through visual means.

Over two two-hour sessions, you will be introduced to:

  • Collections databases: what they are, how they are built, and how to navigate them
  • Web-scraping: how do you go from a webpage on the internet to a dataset on your computer? A basic introduction to web-scraping with R, with a worked example; the ethics of data; and how to evaluate a website for future data collection
  • Data visualisation software: what options are available and how to use the open-source, online system mapping tool, Kumu
  • Cultural evolutionary theory: cultural evolution is the change of culture over time; explore a theoretical perspective that views cultural information as an evolutionary process which teaches us, through cultural transmission, more about human decision making

The workshop will take place over two sessions. The first session (30 January) will cover collections databases and web-scraping. The second session (6 February) will cover data visualisation and cultural evolutionary theory. These sessions will consist of practical tutorials and discussion with the course leads. After each session, participants will be given an optional task to try out new skills acquired, on which they can receive feedback from the course organisers.
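The course teaches web-scraping in R; purely as an illustration of the underlying mechanics (the HTML fragment and class name below are invented), extracting structured data from markup looks like this with Python's standard library:

```python
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    """Collect the text of every element whose class is 'object-title'."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if ("class", "object-title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

# A fragment standing in for a downloaded collections-database page
page = """
<ul>
  <li><span class="object-title">Attic amphora</span></li>
  <li><span class="object-title">Tang figurine</span></li>
</ul>
"""
scraper = TitleScraper()
scraper.feed(page)
# scraper.titles now holds the extracted object titles
```

Real scraping adds polite downloading (rate limits, robots.txt, terms of use) before parsing, which is where the course's ethics discussion comes in.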

Convenor: Dita N. Love (CDH Methods Fellow)

In the introduction to the special issue Testimonial Cultures, Sara Ahmed and Jackie Stacey wrote that “speaking out about injustice, trauma, pain and grief have become crucial aspects of contemporary life which have transformed notions of what it means to be a subject, what it means to speak, and how we can understand the formation of communities and collectives” (2001, p. 2). These workshops therefore ask: what does it mean to centre survivor-knowledge, and to witness together the aftermath of intersecting violence, when language and traditional methods often fail to re-present the experience of trauma? How can we avoid tokenising creative-digital research under the pressures of a precarious academy and creative sector?

Convenor: Orla Delaney (CDH Methods Fellow)

What does it mean to prioritise small data over big data?

Cultural heritage datasets, such as museum databases and digital archives, seem to resist the quantitative methods we usually associate with data science work, asking to be read and explored rather than aggregated and analysed. This workshop provides participants with a non-statistical toolkit that will enable them to approach, critique, and tell the story of a cultural heritage dataset.

Together we will consider approaches to the database from the history of science and technology, media archaeology, and digital ethnography. This will be done alongside an overview of practical considerations relevant to databasing in the sector, such as standards like FAIR (Findable, Accessible, Interoperable, Reusable) and CARE (Collective Benefit, Authority to Control, Responsibility, Ethics), specific technologies like linked data, and the results of recent projects aiming to criticise and diversify the underpinning technologies of cultural heritage databases. This workshop is aimed both at cultural heritage professionals and students, and at data science researchers interested in introducing a qualitative approach to their work.

Convenor: Estara Arrant (CDH Methods Fellow)

This methods workshop will teach students three powerful machine learning algorithms appropriate for Humanities research projects. These algorithms are designed to help you identify and explore meaningful patterns and correlations in your research material and are appropriate for descriptive, qualitative data sets of almost any size. These algorithms are applicable to virtually any Humanities field or research question.

  • Multiple Correspondence Analysis: automatically identifies correlations and differences between specific data elements. This helps one to understand how different features (‘variables’ or ‘characteristics’) of one’s data are related to each other, and how strong their relationships are. This can be useful in almost any research project. For example, in a sociological dataset, this analysis could help clarify relationships between specific demographic characteristics (race, gender, political affiliation) and socioeconomic status (working class, education level, income bracket).
  • K-modes clustering and hierarchical clustering: find groups of similarity and relationship within the entirety of your data. Clustering helps one to identify which variables/characteristics group together, and which do not, and the degree of difference between groups. For example, such clustering could allow an art historian to see how paintings from one decade are characterised by style and artist, as contrasted with paintings from another decade (thus tracking shifts in artistic trends over time).

This workshop will specifically cover: determining when your research could benefit from machine learning analysis; designing a good methodology and running the analysis; interpreting the results and determining whether they are meaningful; producing a useful visualisation (graphic) of the results; and communicating the findings to other scholars in the Humanities in an accessible way. Students will actively implement a small research project using a practice dataset and are encouraged to try out the methods in their current research. They will learn the basics of running the analysis in the R programming language.
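The course runs its analyses in R; as a hedged Python sketch of the clustering idea only (toy data, a simplified k-modes), clusters of categorical records can be found by alternating assignment to the nearest mode with mode updates:

```python
from collections import Counter

def matches(a, b):
    """Count matching categorical features (the complement of Hamming distance)."""
    return sum(x == y for x, y in zip(a, b))

def assign(records, modes):
    """Assign each record to the mode it shares the most features with."""
    return [max(range(len(modes)), key=lambda k: matches(r, modes[k]))
            for r in records]

def update_modes(records, labels, k):
    """New mode = the most frequent value of each feature within a cluster."""
    modes = []
    for c in range(k):
        members = [r for r, l in zip(records, labels) if l == c]
        modes.append(tuple(Counter(col).most_common(1)[0][0]
                           for col in zip(*members)))
    return modes

# Toy data: (style, medium) for a handful of paintings
paintings = [
    ("impressionist", "oil"),
    ("impressionist", "pastel"),
    ("cubist", "oil"),
    ("cubist", "collage"),
]
modes = [paintings[0], paintings[2]]  # seed with one record per cluster
for _ in range(5):  # a few refinement rounds suffice on data this small
    labels = assign(paintings, modes)
    modes = update_modes(paintings, labels, 2)
```

R's klaR and cluster packages provide production versions of k-modes and hierarchical clustering, with proper initialisation and diagnostics.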

Convenor: Dr Eleanor Dare (CDH Methods Fellow)

This Methods Workshop will invite participants to originate innovative research methods using virtual and augmented reality technologies underpinned by theoretical and pedagogic understandings. The session is conceived in recognition of an increasing interest in using virtual and extended reality (VR and XR) to create collaborative research spaces that span different locations, time zones, and spatiality. Such spaces might be used to investigate the impact of design, architecture and location on education or new ways to teach an array of subjects, from language to mathematics to performance, AI ethics and music.

About the convenor: Eleanor is currently the Co-Convenor for Arts, Creativity and Education at the University of Cambridge, Faculty of Education, and also the Senior Teaching Associate: Educational Technologies, Arts and Creativity, lecturing and supervising on the MPhil in Arts, Creativities and Education, the MPhil in Knowledge, Power and Politics, and the MEd in Transforming Practice. Eleanor is module lead for AI and Education, a Personal and Professional Development course at Cambridge.

Eleanor Dare’s research addresses the implications of digital technology and virtuality as a material for collaboration, critical-educational games development, performance, worldbuilding and pedagogic experimentation. Eleanor has been involved in several AHRC/EPSRC/ESRC/Arts Council/British Council funded projects investigating aspects of virtual and extended reality as well as projects with the Mozilla Foundation (AI-Musement/Monstrous 2022-2023), Theatre in the Mill Bradford (Bussing Out, 2022) and the Big Telly Theatre Company (via the Arts Council of Northern Ireland) for Rear Windows, forthcoming.

This Methods Workshop explores primary data collection using digital and online qualitative methods. It teaches methods for the detailed assessment of the suitability of online platforms for the collection of research data, considering not only general ethical issues (privacy, encryption, terms and conditions) but also inclusivity for neurodivergent and vulnerable participants.

CDH Methods | Writing Interactive Fiction new Mon 27 Nov 2023   13:00 Finished

Interactive Fiction (IF) stories let readers decide which paths the story should follow, featuring non-linear narrative design. The discipline combines the excitement of post-structuralist narratives with the power of creative coding, making it a perfect introduction for participants more familiar with one field than the other. In this workshop, led by Methods Fellow Claire Carroll, we’ll explore both parser-based (rooted in reader instructions and terminal response) and choice-based (hyperlink or multiple choice-driven) IF and work together to write our own interactive fiction. The workshop will also introduce participants to the passionate IF community, which offers advice and support to experienced writers and newcomers alike.
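Choice-based IF can be modelled very simply; as a toy sketch (not the workshop's tooling), a story is a graph of passages, and a reading is a walk through it:

```python
# Each node is a passage plus the choices leading out of it
story = {
    "start": ("You stand at a fork in the path.",
              {"left": "grove", "right": "river"}),
    "grove": ("A quiet grove. The story ends here.", {}),
    "river": ("A wide river blocks your way. The story ends here.", {}),
}

def play(node, choices):
    """Follow a scripted list of choices through the story graph."""
    visited = [node]
    for choice in choices:
        passage, options = story[node]
        if choice not in options:
            break  # no such exit from this passage
        node = options[choice]
        visited.append(node)
    return visited

path = play("start", ["left"])
```

Tools like Twine (choice-based) and Inform (parser-based) wrap this basic passage-graph idea in authoring languages and interfaces for readers.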

First Steps in Version Control with GitHub new Mon 26 Feb 2024   14:00 Finished

Please note this workshop has limited spaces, and a pre-course questionnaire is in place. Please complete it before the session.

Version control helps you to write code for your research more sustainably and collaboratively, in line with best practices for open research. You might use code for collecting, analysing or visualising your data or something else. Everyone who codes in some way can benefit from learning about version control for their daily workflow.

This workshop will cover the importance of version control when developing code and foster a culture of best practice in FAIR (Findable, Accessible, Interoperable, Reusable) code development. We will take you through the basics of GitHub to help you store, manage, and track changes to your code and develop code collaboratively with others.
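The basic store-and-track workflow the workshop covers can be sketched in a few Git commands. The repository and file names below are hypothetical, and the GitHub steps are shown as comments because they require an account and a remote repository:

```shell
# Hypothetical first Git session; names and messages are illustrative only.
mkdir my-analysis && cd my-analysis
git init                                   # start tracking this folder
git config user.name "Your Name"           # one-off identity setup for commits
git config user.email "you@example.com"
echo "print('hello')" > analysis.py
git add analysis.py                        # stage the new file
git commit -m "Add first analysis script"  # record a snapshot in the history
git log --oneline                          # inspect the recorded history
# git remote add origin https://github.com/<user>/my-analysis.git
# git push -u origin main                  # publish to GitHub (needs an account)
```

Each `commit` records a recoverable snapshot, which is what makes the workflow safer than keeping `final_v2_REALLY_final.py` copies by hand.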

Designed with beginners in mind, this workshop caters to those who have not yet delved into Git or GitHub. While prior knowledge of a programming language (e.g., R or Python) would be beneficial, it is not a prerequisite.

Code in research helps to automate the collection, analysis or visualisation of data. Although your code may fulfil its research objective, you might have wondered how to improve it, code more efficiently, or make it ready for collaboration and sharing. Perhaps you have struggled to debug or understand it.

In this intermediate workshop, we will introduce several coding design principles and practices that ensure code is reliable, reusable and understandable, enabling participants to take their code to the next level.

The workshop will begin by introducing the key concepts using ample examples. Participants will then work in groups to apply the concepts either to code provided by the convenor or to their existing projects, with guidance from the convenor. Participants will also have the opportunity to discuss their project goals with the convenor to demonstrate how the best practices can be implemented during the coding process.
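One of the design principles described above can be sketched briefly: extracting inline script logic into a small, named, documented function that is easier to reuse, test and share. The example is hypothetical and is not material from the workshop itself:

```python
# A minimal sketch of one coding design principle: a clear name, a docstring
# and an edge-case guard make code reliable, reusable and understandable.

def normalise(values):
    """Scale a list of numbers to the 0-1 range.

    Wrapping this logic in a function (rather than leaving it inline in a
    script) lets it be reused, documented and unit-tested.
    """
    lo, hi = min(values), max(values)
    if hi == lo:                      # guard: avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(normalise([2, 4, 6]))  # -> [0.0, 0.5, 1.0]
```

The guard clause is the kind of edge case that is easy to miss in one-off script code but becomes natural to handle once the logic has a clear interface of its own.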

This workshop is for individuals who have some prior experience with Python and who, ideally, have a coding project that they wish to work on. Participants are encouraged to arrive with a specific objective or desired output for their coding project. For example, you might wish to pre-process your data, add a specific analysis to your project, or make your code publicly available.

Across two sessions, participants will be introduced to the ancient yet evolving practices of commonplace-book keeping and the ‘modernised’ digital tools and methods for extracting, indexing, sustaining and networking knowledge fragments from personal notes, anthologies and archives for idea generation. Commonplacing—manifest as the classical vade mecums (‘come with me’ books of phrases for rhetors), the early-modern scholar’s indexed bodies of learning, the eighteenth-century domestic commonplace books of culinary and medicinal recipes and the nineteenth-century collaborative records of reading—is as much a method for knowledge compilation as a way to structure collective (and ‘re-collected’) thoughts. The commonplace book’s modern afterlife may be traced in the Zettelkasten method and micro-blogging sites like Tumblr, which facilitate the systematic storage and dispersal of quotations and other media.

The interactive sessions will draw upon the theoretical underpinning of commonplacing as a productive ideation approach, as well as new digital tools for translating atomised ‘commonplaces’ (and their metadata) into network graphs and databases that visualise potentially hidden connections for research and pedagogy.
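The indexing step behind such tools can be sketched in a few lines: inverting a mapping from notes to tags so that fragments sharing a theme surface together. The notes and tags below are hypothetical, and real tools would build a full network graph from such an index:

```python
# A minimal sketch of indexing note fragments by shared tags, the simplest
# step towards a networked commonplace book. Notes here are invented.
from collections import defaultdict

notes = {
    "note1": {"memory", "archive"},
    "note2": {"archive", "recipe"},
    "note3": {"recipe", "memory"},
}

# Invert the notes-to-tags mapping: each tag points to the notes sharing it.
tag_index = defaultdict(set)
for note, tags in notes.items():
    for tag in tags:
        tag_index[tag].add(note)

# Notes filed under the same tag are the 'hidden connections' a network
# graph of this index would make visible.
print(sorted(tag_index["archive"]))  # -> ['note1', 'note2']
```

Drawing an edge between every note and each of its tags turns this index into the kind of network graph the sessions describe, where fragments two steps apart share a theme.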

Join our Methods Fellow, Amira Moeding, in a workshop which introduces methods of historical enquiry into the development of digital technologies and digital data. How can we write the history of technology today? What are the limits of historical enquiry, and what are its strengths? Moreover, what can we learn from historical narratives about technologies? More concretely, what can the history of “Big Data” tell us about artificial intelligence today? What, for example, were seen as the pitfalls of and problems with bias early on in the development of data-driven applications?

Together with you, Amira will think through and employ methods of historical enquiry and critical theory to gain a better understanding of the origins of ‘data-driven’ digital technologies. In doing so, the workshop aims to build an understanding of statistical and data-driven methods by asking how they came about, why they became attractive, and to whom. The workshop thus links technologies back to the interests and contexts that rendered them viable. This line of enquiry will allow us to ask what ‘technological progress’ currently means, how stories of ‘progress’ are narrated by industry actors, and what ‘risks’ become apparent from their perspective. By providing this contextualisation and recovering the early interests that drove developments in artificial intelligence research and ‘Big Tech’, we will also see that progress, and the promises for the future it holds, are not ‘objective’ or ‘necessary’ but localised in time and space. Finally, we will ask to what degree digital humanities can not only use digital methods to aid the humanities, but also employ historical and philosophical methods to provide a basis for criticising and theorising ‘the digital’ and putting the methods on which so-called ‘artificial intelligences’ are based into perspective.
