Exploration of student-generated educational data in LMS

An LMS captures several types of educational activity data that can be harnessed and translated into actionable knowledge:

  • Clickstream data
  • Page views and content access
  • Discussion participation
  • Assignment and quiz submissions

Google Analytics for students’ clickstream data:

Data solution 1: Behavior Flow. Nodes are points through which traffic flows. A connection represents the path from one node to another, and the volume of traffic along that path. An exit indicates where users left the flow. In the Events view, exits don’t necessarily indicate exits from your site; they only show that a traffic segment didn’t trigger another Event. Use the Behavior Flow report to investigate how engaged users are with your content and to identify potential content issues. The Behavior Flow report can answer questions like:

  • Did students go right from homepage to assignments/quizzes without additional navigation?
  • Is there an event that is always triggered first? Does it lead students to more events or more pages?
  • Are there paths through a course site that are more popular than others, and if so, are those the paths that you want students to follow?

Behavior Flow: Like all flow reports, the Behavior Flow report displays nodes, connections and exits, which represent the flow of traffic in a course site.

Data solution 2: Funnel Visualization: how do students funnel through to a destination page in your course site? https://support.google.com/analytics/answer/2976313 and https://support.google.com/analytics/answer/6180923

Funnel Visualization: The funnel visualization shows the stream of visitors who follow specific paths of a website and thus interact with it in order to reach a website goal. https://support.google.com/analytics/answer/2976313?hl=en

The sample data for the example funnel visualization was gathered from a Canvas (LMS) course site; the goal was set to the Modules navigation menu. 843 users accessed the course homepage during a certain period of time. Of those 843 users, 262 (31%) went from the homepage directly to the course Modules page (the destination), (581-177)/843 = 48% navigated to a different page of the course, and 177 (21%) exited the course.

The funnel conversion rate (59.20%) indicates the percentage of visits that included at least one page view of the first step before at least one page view of the goal page. Page views can occur non-sequentially for a funnel match. We can look at each step of the funnel and compare the number of users at the first step to the number of users at the second step. Wherever a drastic number of users drops off, we can go back to that page and optimize it to increase the conversion rate.
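The arithmetic above can be checked with a few lines of R; the counts come from the example, and the variable names are mine:

```r
# Funnel counts from the example above
homepage <- 843          # users who reached the course homepage
direct   <- 262          # went straight to the Modules page (the goal)
exited   <- 177          # left the course from the homepage
elsewhere <- homepage - direct - exited   # navigated to another course page

# Shares of homepage visitors, rounded to whole percents
round(100 * c(direct = direct, elsewhere = elsewhere, exited = exited) / homepage)
```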

Social Network Analysis for discussion interaction data:

  • How actively do students interact with each other on online discussion forums?
    • Identify the students who are actively engaged in discussions, providing many comments on peers’ postings.
    • Identify the students whose initial discussion threads became so popular that they received a large number of replies.
  • Do the quantity and/or richness of discussion posts vary across topics?
  • Does the community structure of discussion interactions represent subgroups of students who share common interests in reality?
  • Do discussion interaction patterns reflect students’ participation in class activities?
  • Does role modeling using centrality metrics represent the level of influence of a student in reality?
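Questions like the first one can be explored with the igraph package. A minimal sketch with a made-up reply network (the student names and edges are invented for illustration):

```r
library(igraph)

# Each edge A -> B means student A replied to student B's post
replies <- data.frame(from = c("S1", "S1", "S2", "S3", "S3", "S4"),
                      to   = c("S2", "S3", "S1", "S1", "S4", "S1"))
g <- graph_from_data_frame(replies, directed = TRUE)

degree(g, mode = "out")  # replies a student gave (active commenters)
degree(g, mode = "in")   # replies a student received (popular threads)
```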

Histogram and scatter plot for quiz submission data (quiz performance and correlation between quizzes):

  • How well did an individual student do in comparison to the entire class?
  • What was the overall performance on a quiz?
  • Is there a relationship between quiz performance and content access, or overall activity in an LMS?
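A quick base-R sketch of both plots; the scores and page view counts are fabricated for illustration:

```r
# Hypothetical quiz scores (out of 10) and LMS page views for ten students
scores <- c(6, 8, 9, 5, 7, 10, 8, 7, 9, 6)
views  <- c(40, 55, 80, 30, 50, 90, 60, 45, 70, 35)

hist(scores, main = "Overall quiz performance", xlab = "Score")

# Individual vs. class: compare one student's score to the class summary
scores[1]; mean(scores); median(scores)

# Relationship between quiz performance and content access
plot(views, scores, xlab = "Page views", ylab = "Quiz score")
abline(lm(scores ~ views))       # trend line
cor(scores, views)               # strength of the relationship
```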

References:

https://journal.r-project.org/archive/2016/RJ-2016-010/RJ-2016-010.pdf

 

Role Modeling in Online Discussion Forums

As LMSs become more widely adopted in fully online, hybrid, and blended courses, their asynchronous discussion platforms are often used as channels for information exchange and peer-to-peer support. For face-to-face courses that leverage online discussion forums as a complement to classroom communication, or as a flipped-classroom tool that facilitates active learning, asynchronous discussion activity correlates with higher course engagement and better overall performance. Under this notion, insights into the roles students play in discussion forums can contribute to improved design and facilitation of asynchronous discussions.

In light of the research conducted in the field of role mining for social networks (Abnar, Takaffoli, Rabbany, & Zaiane, 2014), we limit our focus to the roles that have been identified in social contexts, and we redefine them in the context of asynchronous discussions.

We developed a Shiny application using social network methods, centrality and power analysis, to analyze and visualize online discussion interactions. Degree and closeness centrality scores are used to identify leaders and peripheries/outermosts, while mediators yield a high betweenness centrality score. The graphs shared below were produced in the application.

Graph 1: each node represents an individual, the color corresponds to a group/community.

Roles derived from asynchronous discussion activities

Leaders: the most active individuals in online discussion forums, i.e., posting well-thought-out threads that invite peers’ comments while also providing feedback on peers’ postings.

Peripheries/Outermosts: the least active individuals in an online discussion forum, who posted few threads, received no responses from peers, and replied to few peers’ postings.

Mediators: the individuals who connect different groups in a network.

Outsiders: the individuals who had minimal participation in a discussion, i.e., posted one thread to a discussion topic.
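A sketch of how such roles might be assigned from centrality scores with igraph; the network and the decile cutoffs here are invented for illustration, not the Shiny app's actual thresholds:

```r
library(igraph)
set.seed(42)
g <- sample_gnp(20, 0.15)          # stand-in for a real discussion network
V(g)$name <- paste0("S", 1:20)

deg <- degree(g)
btw <- betweenness(g)

# Illustrative cutoffs: top decile of degree -> leader, top decile of
# betweenness -> mediator, bottom decile of degree -> periphery/outermost
role <- ifelse(deg >= quantile(deg, 0.9), "leader",
        ifelse(btw >= quantile(btw, 0.9), "mediator",
        ifelse(deg <= quantile(deg, 0.1), "periphery", "member")))
table(role)
```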

Implications

When asynchronous discussions are structured and designed to promote deep learning through collaborations, such as seeking information from peers, suggesting alternative solutions and providing answers/feedback, it would be desirable to help participants move from the periphery of the information exchange network to the core. When an online discussion forum with a well-defined topic or prompt is used primarily for students to post responses to the topic, instructors can incorporate incentives into the discussion forums to motivate learners to participate in discussions in a constructive manner (Hecking, Chounta & Hoppe, 2017).

REFERENCES:

Abnar, A., Takaffoli, M., Rabbany, R., & Zaiane, O. (2014). SSRM: Structural Social Role Mining for Dynamic Social Networks. 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining.

Hecking, T., Chounta, I., & Hoppe, U. H. (2017). Role modeling in MOOC discussion forums. Journal of Learning Analytics, 4(1), 85-116.

Leveraging Canvas quiz submission data to inform quiz design

Quizzes are often used as an assessment tool to evaluate student understanding of course content. Practice quizzes, a form of informative self-assessment, have also been used to help students study for final exams. This self-assessment capability encourages students to make greater use of course materials.

If course instructors use a quizzing assessment strategy in Canvas, we can gather quiz submission data and use it to analyze the effectiveness of quiz questions. By analyzing the submission data, course instructors can verify whether a quiz is effective in helping students grasp course content, and whether the quizzes produce meaningful data about students’ performance and understanding of course materials.

In this blog, we will introduce a self-service tool that leverages quiz submission data to inform student learning and the efficacy of quiz design in helping students master course materials. If a quiz is designed for students to study for a high-stakes exam, or as a formative assessment, we can use a scatter plot (with a smoothed regression line) to see whether there is a correlation between student performance on the practice quiz and on the final exam. If faculty implement a pre- and post-test to evaluate the efficacy of instruction in helping students grasp course content, we can use a density plot to display the distribution of score percentages (kept_score/points_possible) for the pre- and post-quiz.
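Both plot types can be sketched with ggplot2; all scores below are fabricated for illustration:

```r
library(ggplot2)

# Hypothetical percentage scores on a practice quiz and the final exam
d <- data.frame(quiz = c(55, 60, 70, 80, 85, 90, 65, 75),
                exam = c(58, 65, 72, 78, 88, 92, 60, 80))
# Scatter plot with a smoothed regression line
ggplot(d, aes(quiz, exam)) + geom_point() + geom_smooth(method = "lm")

# Density plot of score percentage (kept_score / points_possible) pre vs. post
pp <- data.frame(pct  = c(40, 45, 55, 50, 70, 80, 85, 75),
                 test = rep(c("pre", "post"), each = 4))
ggplot(pp, aes(pct, fill = test)) + geom_density(alpha = 0.5)
```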

Canvas’ built-in Student Analysis tool allows course instructors to download quiz submission data for one quiz at a time and examine student performance. However, it is cumbersome if course instructors would like to gather submission data for all quizzes in a course.

Course instructors can install a userscript that gathers the submission data for all quizzes in a course:

  1. Install a browser add-on: Greasemonkey for Firefox or Tampermonkey for Chrome/Safari. Skip this step if you have already installed the add-on.
  2. Install the Get Quiz Submission Data userscript.
  3. Log in to Canvas, go to a course, navigate to the Quizzes page, scroll to the bottom of the page, and click the “Get Quiz Submission Data” button.
  4. Save the data in ‘Comma Separated’ (CSV) format to your local computer; you may name it ‘quiz.csv’.
  5. Open the Shiny app https://jing-zen-garden.shinyapps.io/quizzes/ and load the quiz.csv file; a series of visualizations of the submission data will be created for you.
    • The first plot shows each student’s score percentage side by side with the mean and median score percentage for the class, which allows course instructors to easily see where a student stands in relation to the entire class.
    • If a quiz is designed for students to practice for a high-stakes exam, we can use a scatter plot (with a smoothed regression line) to see whether there is a correlation between student performance on the quiz and on the exam.
    • If faculty would like to use a pre- and post-test to evaluate the effectiveness of an instructional strategy in helping students grasp course content, we can use density plots to display the distributions of time_spent and score percentage (kept_score/points_possible) for the pre- and post-quiz.


Learner Content Access Analytics

If you are interested in exploring learner content access data to inform your course design, you are in the right place. This blog is geared toward informing instructors and course designers about the efficacy of a course design: How many students returned to access course content after the course ended, and how often? Which format/type of content was viewed most? How often did learners access course content while the course was in session?

In this blog, we will demonstrate self-service tools that allow course instructors to answer questions about how students interact with Canvas. We will show you how to download student access report data for a Canvas course using a userscript, and how to upload the data file to a Shiny app that visualizes student engagement in the Canvas course.

The Shiny app produces a number of visualizations of student content access activities over time. The information provides course designers/instructors with insights into the efficacy of a content design. For instance, if you embedded a number of files in a page hoping students would review them, it is helpful to know whether students accessed the page, which files on the page students were more likely to view, and which files they rarely clicked on.

A userscript is a script that runs in a web browser to add functionality to a web page. The userscript we are going to use adds a ‘get user page views’ tab to a Canvas course People page. To enable a userscript, you first need to install a userscript manager: for Firefox, the best choice is Greasemonkey; for Chrome, Tampermonkey. Once you’ve installed a userscript manager, click on the Student Usage Report Data userscript, then click the Install button. The script is then installed and will run in any Canvas course site it applies to.

Quick installation of a userscript that downloads the access report data for an entire course

  1. Install a browser add-on: Greasemonkey for Firefox or Tampermonkey for Chrome/Safari.
  2. Install the Student Usage Report Data userscript.
  3. Log in to Canvas, go to a course, and click the ‘People’ course menu to navigate to the People page. (If you don’t see the tab after you have successfully installed the userscript, refresh the People page.)
  4. Click the ‘Get User Page Views’ tab, then click ‘Start’ to begin the data extraction process.
  5. After the page view info for every student is extracted, you will be prompted with a dialogue box asking you to save or open the file.
  6. Open the file in Excel and save it as a ‘Comma Delimited’ (CSV) file on your local computer.

Loading the data file to a Shiny app that analyzes and visualizes the data

  1. Click on the link https://jing-zen-garden.shinyapps.io/CanvasAccess/ to open the Content Access Analysis app.
  2. Click on the Browse button to upload the student usage report csv file to the app, and the visualizations of students’ content access will be created for you.
    • Category refers to the content type: announcements, assignments, collaborations, conferences, external_urls, files, grades, home, modules, quizzes, roster, topics, and wiki.
    • Title is the name of a specific piece of content that you defined, such as a file name, a page title, an assignment title, a quiz title, etc.
    • The time series plot visualizes student content access by first and last access date.
    • The primary reason for referencing the Last Access Date is to examine whether students access course content after a course has ended, and whether there is a pattern as to when they are more likely to revisit a course site after the course ends.
    • In addition, we added a date range control widget to the time series plot, which allows course instructors to analyze course access within a date range. For instance, course instructors can select a date range to see whether students revisited course materials after a course ended, or whether students leveraged course materials to prepare for an exam right around the exam date.
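The after-course-end question can be answered directly from the report’s LastAccess column. A sketch with fabricated records (the dates and course end are invented):

```r
# Hypothetical rows shaped like the usage report
access <- data.frame(
  UserID     = 1:6,
  LastAccess = as.Date(c("2017-03-05", "2017-03-20", "2017-04-02",
                         "2017-04-10", "2017-03-28", "2017-04-12")))

course_end <- as.Date("2017-03-31")   # assumed course end date
revisits <- subset(access, LastAccess > course_end)
nrow(revisits)   # students whose last access fell after the course ended
```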

Debug:

  1. If you get an error message after you load the access_report csv file to the Shiny app, “Error: replacement has 0 rows, data has #####”, the error results from a mismatch in headers (column names). Open the csv file in Excel and make sure the data file includes the following headers, with no spaces in any header: UserID, DisplayName, Category, Class, Title, Views, Participations, LastAccess, FirstAccess.
  2. If you get an error message for the time series plot like “Error: ‘to’ cannot be NA, NaN or infinite”, open the csv file in Excel and save it as a ‘Comma delimited’ csv. Reload the data file to the Shiny app, and the time series plot should display properly.

Data visualization in Treemaps

A treemap is a visual representation of a data tree, where each node is displayed as a rectangle, sized and colored according to values that you assign. The size and color dimensions correspond to a node’s value relative to all other nodes in the graph. (https://developers.google.com/chart/interactive/docs/gallery/treemap)

When your data has a nested/tree relationship, a treemap can be an efficient way of presenting it. The reason is that when the color and size dimensions are correlated within a data tree, one can often easily identify patterns that would be difficult to spot otherwise. “A second advantage of using interactive treemaps is that, by construction, they make efficient use of space. As a result, they can legibly display many items simultaneously.” (https://en.wikipedia.org/wiki/Treemapping)

For instance, we can use two charts to present two sets of data that have a ‘tree’ or hierarchical relationship.

chart one – parent nodes; chart two – child nodes

We can combine the two data sets and use a treemap to visualize the data tree in nested rectangles.
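With googleVis, the combined data tree can be drawn as a treemap. The node names and counts below are invented to mirror the parent/child charts:

```r
library(googleVis)

# One row per node; Parent links child nodes to their parents
tree <- data.frame(
  Node   = c("All", "discussion", "video", "posted thread", "watched video"),
  Parent = c(NA, "All", "All", "discussion", "video"),
  Size   = c(100, 40, 60, 25, 55),
  Color  = c(0, 40, 60, 25, 55))

tm <- gvisTreeMap(tree, idvar = "Node", parentvar = "Parent",
                  sizevar = "Size", colorvar = "Color",
                  options = list(highlightOnMouseOver = TRUE))
# plot(tm)  # opens the interactive treemap in a browser
```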

Below, I include two treemap visualizations of the same data tree. In comparison to treemap one, treemap two highlights elements when they are moused over, and sets specific colors for certain elements when this occurs.

treemap one – nested; treemap two – highlights

The treemap above allows me to spot patterns more easily than the two bar charts, for example:

  • Users who posted discussion threads were likely to watch videos as well
  • Users who clicked on the FAQ tended to participate in other activities as well

Another more complex example is available at https://jqi.host.dartmouth.edu/1176treemap.html

  • The root level represents the level of completion status relative to all the nodes
  • The first nested nodes correspond to individual participants who went through a certain percentage of the course modules
  • The second nested nodes correspond to the individual page view activities

Using a motion chart to illustrate page view activities over time

A motion chart is a dynamic chart for exploring several indicators over time.

In this blog, we will demonstrate how to use a motion chart to illustrate student page view activities over time, in an attempt to examine whether there is a pattern between indicators, such as cumulative_page_views, and a given date. The chart used in this blog was built with the user-in-a-course-level participation data that Canvas (Learning Management System) collects. The data was harvested using the API endpoint that Canvas provides: /api/v1/courses/:course_id/analytics/users/:student_id/activity. An example of using a Ruby script to gather student page view activity data is available at github/jingmayer/garden.
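The same endpoint can also be queried from R. This is a hedged sketch using the httr package (an assumption; the original workflow used a Ruby script), with placeholder base URL and token and no error handling:

```r
library(httr)

# Build the Canvas analytics endpoint for one student in one course
activity_url <- function(base, course_id, student_id) {
  sprintf("%s/api/v1/courses/%s/analytics/users/%s/activity",
          base, course_id, student_id)
}

# Fetch with a personal access token
get_activity <- function(base, course_id, student_id, token) {
  resp <- GET(activity_url(base, course_id, student_id),
              add_headers(Authorization = paste("Bearer", token)))
  content(resp)   # parsed list including page_views by hour
}
```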

First, let’s prepare the data set and include the following fields:

  • datetime: The date and time when a student accessed a Canvas course
  • user: The unique user_id for a student
  • pageview_id: The unique identifier for the data set, composed of a user_id and the hour of the day when the page view record was created by the user
  • cumulative_page_views: The accumulated/aggregated count of page views for a student up to a given date
  • daily_page_views: The actual daily count of page views for a student on a given date
  • total_activity_time: The total activity time that a student had spent in the course as of the time the data was pulled

After the data is prepared, you can build a motion chart in R using the googleVis package.
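A minimal googleVis sketch with fabricated rows in the field layout above:

```r
library(googleVis)

# Hypothetical page view records; pageview_id = user_id + hour of day
pv <- data.frame(
  pageview_id           = c("510589911", "510589912", "620000011", "620000012"),
  datetime              = as.Date(c("2016-10-01", "2016-10-02",
                                    "2016-10-01", "2016-10-02")),
  daily_page_views      = c(10, 6, 4, 8),
  cumulative_page_views = c(394, 400, 50, 58),
  total_activity_time   = c(52000, 53000, 9000, 9900))

mc <- gvisMotionChart(pv, idvar = "pageview_id", timevar = "datetime",
                      xvar = "daily_page_views", yvar = "cumulative_page_views",
                      sizevar = "total_activity_time")
# plot(mc)  # opens the motion chart in a browser
```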

You may switch the options for x-axis, y-axis, color and size to observe the page view activities from a different perspective. For instance, you may switch the Color option from ‘datetime’ to ‘user’ to observe the page_view activities for the same users over time.

In our example, the y-axis corresponds to the incremental page_views accumulated for an individual up to a given date, the x-axis denotes daily page_views, and the size of each point indicates the total_activity_time (in seconds) that an individual spent in the course. Each point corresponds to a unique pageview_id, which contains two parts: the user and the hour of the day the record was created.

For instance, I would like to identify some self-motivated participants in a course. As I played the motion chart, I noticed that user ‘5105899’ had accessed the course and viewed many pages on a number of days. The screenshot of the motion chart below demonstrates this: the point marked ‘510589911’ represents user ‘5105899’ viewing 10 pages at 11:00am on Oct. 1, 2016; by that time, the user had viewed 394 pages in total. User ‘5105899’ (the greenish points, circled in green) generated the highest number of cumulative_page_views compared to peers who also accessed the course on that day. Furthermore, I am curious to see what the page view activities of self-motivated participants such as user ‘5105899’ look like over time: Did they access the course on a regular basis? Did they spend similar amounts of time in the course? You may switch the motion graph type (iconType: Bubble, Bar, or Line) to gain a different perspective on the same set of data.

Using network analysis to visualize online discussion interaction

In this blog, we will talk about using a userscript to harvest Canvas discussion data, and loading the data into an RStudio Shiny app that employs network analysis to analyze and visualize student discussion interactions. Instructors may leverage the visualizations to make an informed decision on discussion facilitation and student group arrangements.

A userscript is a script that runs in a web browser to add functionality to a web page. The userscript we are going to use adds a ‘get discussion data’ feature to a Canvas course discussion page. To use a userscript you first need to install a userscript manager: for Firefox, the best choice is Greasemonkey; for Chrome, Tampermonkey. Once you’ve installed a userscript manager, click on the get discussion data userscript, then click the Install button. The script is then installed and will run in any Canvas course site it applies to.

Open a Canvas course that contains discussion activities, click the ‘Discussions’ navigation tab, scroll to the bottom of the discussion page, click the ‘Get Discussion Entries’ tab, select “Generate one file with interactions”, and save the data in a csv file format.

If you open the file in a text editor, it should appear like the following format:

from,to,weight,group
Stu1,Stu2,511,one
…….,…….,398,one
…….,…….,484,one
Stu2,Stu1,66,two
…….,…….,680,two
…….,…….,691,two

Each of the four column headers refers to:
from – reply_author; to – initial_entry_author; weight – reply_word_count; group – topic_id

Open the networkgraph app and load the csv file to the app. The discussion data is then presented in directed, weighted network diagrams.

  • Community detection: the edge.betweenness.community algorithm, an approach to community detection in social networks, is applied to detect groups that consist of densely connected nodes with fewer connections across groups.
  • The degree of a node (size of an orange circle): in-degree and out-degree measure the direct ties for each node. Each node represents a student, and the size of a node corresponds to the quantity of interactions associated with the node. You may adjust the size of nodes using the slider (control widget).
  • The weight of a directed edge (thickness of a directed link): the direction of a connection corresponds to the direction of an immediate interaction, and the thickness of a link represents the length of an interaction, in this case, the word count of a reply message. You may adjust the thickness of links using the weight control widget.
  • You can select a group to examine student discussion interactions within the network.
  • You can select an individual student and examine his/her discussion activities in relation to the overall discussion interactions.
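The diagram logic can be sketched with igraph; the edge values below are invented in the from,to,weight format described above, and cluster_edge_betweenness is igraph's current name for the edge.betweenness.community algorithm:

```r
library(igraph)

edges <- data.frame(from   = c("Stu1", "Stu1", "Stu2", "Stu3", "Stu4", "Stu4"),
                    to     = c("Stu2", "Stu3", "Stu1", "Stu4", "Stu3", "Stu1"),
                    weight = c(511, 398, 66, 120, 80, 95))
g <- graph_from_data_frame(edges, directed = TRUE)

V(g)$size  <- degree(g, mode = "all")   # node size ~ number of direct ties
E(g)$width <- E(g)$weight / 100         # edge thickness ~ reply word count

# Community detection on the undirected projection
comm <- cluster_edge_betweenness(as.undirected(g))
membership(comm)
# plot(g, mark.groups = communities(comm))
```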

The app has additional features besides the network diagrams that instructors can leverage.

    • Data_Summary:
      • You can search by a student name and locate all interactions associated with the student.
      • You can sort the data by the weight (the word count) of an interaction.
      • You can use the Plot feature to examine the weight of an edge or the degree of a node for a student in relation to his/her peers.
    • Matrix:
      • You can search by a student name and find out the total number of interactions and the total word counts of all the interactions associated with the student.
      • You can sort the data by the number of interactions or total word counts to identify the most ‘influential’ or ‘active’ nodes.
      • You can also download the matrix for further statistical analysis.

matrix

Sankey diagram and content design (R gvisSankey)

In a previous blog, I discussed the application of Sankey diagrams in course design and included an example of a course access flow chart built in Tableau.

In this blog, I build a user flow diagram in R using the googleVis gvisSankey function to visualize student actions (participate or view) on assignments, for instance, quizzes and discussions.

In a self-paced open online course, I would like to find out the disparity in the number of attempts between previewing a quiz and taking the quiz (clicking the submit button), and the difference in action between reviewing discussion threads and participating in a discussion.

Student content access raw data was gathered and used to build the visualization below, which presents student actions on quizzes and discussions in a Sankey chart. The chart was built in R using the googleVis gvisSankey function.

Chart 1: The width of a grey line indicates the total count of students who participated in or viewed an object. The length of the quizzes and topics bars represents the total count of students who took an action on the object. The length of each bar on the right denotes the total count of students who either participated in or viewed the item.

The visualization suggests that for the discussion topic “Content Engagement” (where the red arrows point), students tended to click through the discussion page rather than posting or replying to a thread, which prompted me to examine the topic description and rephrase it.

participation

To build a graph like this, we first need to prepare a file containing the elements that you would like to examine. In this example, I gathered user page_view data in an open online course that includes the following fields, and saved it in a csv file format:

  • UserID is the unique student id.
  • Category includes the content and feature that a student viewed, like announcements, assignments, grades, home, modules, quizzes, roster, topics, and wiki.
  • Class includes classification for each Category like announcement, assignment, attachment, discussion_topic, quizzes/quiz, etc.
  • Title is the name of the content.

You must install the packages in R before using them; you only need to do this once:

install.packages("googleVis")
install.packages("sqldf")
# Require/call the packages
library(googleVis)
library(sqldf)
# Load the file
pageview <- read.csv("pageview.csv", header = TRUE)
# Manipulate the data: stack Category -> Action flows on top of Action -> Title flows
Sankey <- sqldf("select Category, Action, count(UserID) as Weight
  from pageview
  where Class not in ('') and Category in ('quizzes', 'topics')
  group by 1, 2
  union all
  select Action, Title, count(UserID) as Weight
  from pageview
  where Class not in ('') and Category in ('quizzes', 'topics')
  group by 1, 2")
# Draw the diagram; 'from' and 'to' name the first two columns of Sankey
plot(gvisSankey(Sankey, from = "Category", to = "Action", weight = "Weight",
  options = list(height = 700, width = 650,
    sankey = "{
      link: {color: {fill: 'lightgray', fillOpacity: 0.7}},
      node: {nodePadding: 5, label: {fontSize: 9}, interactivity: true, width: 30}
    }")))

Sankey diagram and course design (Tableau)

A Sankey diagram is commonly used to visualize the relationships and flows between multiple elements. Inspired by blogs on Sankey charts in Tableau, I made an attempt to build one using student page_views data gathered in a MOOC course. The diagram shows course participants’ content access flow and potentially suggests certain patterns.

The diagram was built with two data points that are included in a page_view object:

  • user_id: a course participant who clicked on a course object (page, tab, menu, link, etc.)
  • content type: the type of content object that a user clicked on

Steps:

  1. prepare the data file: user_id, content_type, RowType (‘original’ or ‘duplicates’)
  2. create a new field [ToPad] based on ‘RowType’:
    IF [RowType] = 'original' THEN 1 ELSE 49 END
  3. create a new bin of size 1 called [Padded]
  4. create a third function [t]:
    (INDEX() - 25) / 4
  5. build the functions that will place our data at the right points vertically when we build the Sankey; these two are identical:
    [Rank 1] = RUNNING_SUM(COUNTD([user_id])) / TOTAL(COUNTD([user_id]))
    [Rank 2] = RUNNING_SUM(COUNTD([user_id])) / TOTAL(COUNTD([user_id]))
  6. start with a sigmoid function, the basis of the viz (it gives the curve) [Sigmoid]:
    1 / (1 + EXP(1)^-[t])
  7. create the curve [Curve]:
    [Rank 1] + (([Rank 2] - [Rank 1]) * [Sigmoid])
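The padded-index and sigmoid steps (4 through 7) can be sanity-checked in R; the rank values here are placeholders for the running-sum positions Tableau computes:

```r
t <- (1:49 - 25) / 4               # step 4: padded index minus 25, divided by 4
sigmoid <- 1 / (1 + exp(-t))       # step 6: the S-curve
rank1 <- 0.2                       # hypothetical vertical start position
rank2 <- 0.8                       # hypothetical vertical end position
curve <- rank1 + (rank2 - rank1) * sigmoid   # step 7
curve[c(1, 25, 49)]                # ~rank1, the midpoint, ~rank2
```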

Resources: http://www.theinformationlab.co.uk/2015/03/04/sankey-charts-in-tableau/

Leveraging Quiz Submissions Data to Inform Quiz Design

The visualization of quiz submission data can help faculty make an informed decision on quiz design. For instance, the analysis of quiz submission data can inform faculty of the difficulty level of quizzes. Faculty can use the information to select a set of quizzes that are neither too difficult nor too easy. Faculty can also use the information to select quizzes that are best for pre and post assessment. If quizzes are set to allow multiple attempts, we can leverage quiz attempts data to identify students who might struggle with a given topic.

Graph 1 shows the number of attempts students took to get a full score on a given quiz. The graph indicates that all students needed only one attempt to get a perfect score on Quiz.2. In comparison, quite a few students needed two, three, or even four attempts to achieve a full score on Quiz.1. This type of visualization allows faculty to identify the most difficult quiz (Quiz.1) and an easier quiz (Quiz.2). Quiz.3 appears to be neither too difficult nor too easy.

graph 1: The Attempt_1 to Attempt_4 sectors represent the attempt(s) students took to get a full score (attempt sectors). The Quiz.1 to Quiz.3 sectors represent the quizzes (quiz sectors). The width of a quiz sector denotes the number of attempts made on a given quiz. The width of an attempt sector denotes the count of students who made that attempt. The thickness of a directional link represents the quantity of attempts.

StuQuizAttempt

Graph 2 below is another way to present the information. This visualization allows faculty to identify the students who struggle with a topic. For instance, student 1 seems to have difficulty understanding the content that Quiz.1 and Quiz.5 are designed to assess.

graph 2: The S1 and S2 sectors represent student one and student two (student sectors). The Quiz.1 to Quiz.7 sectors represent seven quizzes (quiz sectors). The width of a quiz sector denotes the attempts on a quiz made by all students. The width of a student sector denotes the attempts on all quizzes made by a student. The thickness of a directional link from a student to a quiz represents the quantity of attempts.
QuizAttempts

Below is the sample matrix that I used to generate graph 2 in R. The x-axis represents seven quizzes, and the y-axis represents 12 students who took the quizzes. The value in each cell denotes the number of attempts that a student made on a quiz before earning a full score.

Quiz 1 Quiz 2 Quiz 3 Quiz 4 Quiz 5 Quiz 6 Quiz 7
S1 6 1 2 1 4 1 2
S2 1 1 2 2 4 1 3
S3 2 1 2 1 4 1
S4 2 1 2 2 5 1 1
S5 3 1 2 2 3 1 1
S6 2 1 1 1 3 1
S7 2 1 2 2 1 1 1
S8 1 1 2 1 1 1 1
S9 2 2 3 4 1
S10 2 1 2 2 4 1 2
S11 2 1 1 2 3 1 1
S12 2 1 2 2 3 1
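A slice of this matrix can be entered in R as follows (only S1 and S2 here, matching graph 2's student sectors; mat2 is the object the chord-diagram code expects):

```r
mat2 <- matrix(c(6, 1, 2, 1, 4, 1, 2,    # S1's attempts on quizzes 1-7
                 1, 1, 2, 2, 4, 1, 3),   # S2's attempts on quizzes 1-7
               nrow = 2, byrow = TRUE,
               dimnames = list(c("S1", "S2"), paste0("Quiz.", 1:7)))
mat2
```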

R code (chordDiagram and circos.par are from the circlize package):
library(circlize)
order <- c("S1", "S2", "Quiz.1", "Quiz.2", "Quiz.3", "Quiz.4", "Quiz.5", "Quiz.6", "Quiz.7")
grid.col <- c("aquamarine4", "cadetblue4", "dimgrey", "dimgrey", "dimgrey", "dimgrey", "dimgrey", "dimgrey", "dimgrey")
circos.par(gap.degree = c(rep(2, nrow(mat2) - 1), 20, rep(2, ncol(mat2) - 1), 20))
chordDiagram(mat2, order = order, grid.col = grid.col, column.col = 1:7)
circos.clear()