Communication researchers solve different kinds of problems, produce different kinds of results, and give appropriate evidence to validate these results. They often report their research in conference papers. We analyzed the abstracts of research papers submitted to the All India Seminar on Information and Communication Technology for Rural Development (ICTRD 2009) in order to identify the types of research reported in the submitted and accepted papers, and observed the program committee discussions about which papers were accepted. We present the research paradigms of the papers, common concerns of the program committee, and statistics on success rates. This information should help researchers design better research projects and write papers that present their results to best advantage.
Keywords: research design, research paradigms, validation, technical writing
In communication engineering, research papers are customary vehicles for reporting results to the research community. In a research paper, the author explains to an interested reader what he or she accomplished, how it was accomplished, and why the reader should care. A good research paper should answer a number of questions:
What, precisely, was your contribution?
What question did you answer?
Why should the reader care?
What is your new result?
What new knowledge have you contributed that the reader can use elsewhere?
What previous work (yours or someone else's) do you build on? What do you provide a superior alternative to?
How is your result different from, and better than, this prior work?
Why should the reader believe your result?
What standard should be used to evaluate your claim?
What concrete evidence shows that your result satisfies your claim?
The result is well communicated if the above questions are well answered. If the result makes an interesting and significant contribution to our knowledge of communication engineering, you'll have a good chance of getting it accepted for publication in a conference or journal.
This paper examines how communication engineers answer the questions above, with emphasis on the design and organization of the research paper and research reports. Very concretely, the examples here come from the papers submitted to ICTRD 2009 and the program committee review of those papers.
2. What, precisely, was your contribution and what kinds of questions do communication engineers investigate?
Before reporting what you did, explain what problem you set out to solve or what question you set out to answer and why this is important. Generally speaking, communication / software engineering researchers seek better ways to develop and evaluate communication, networking and software. Table 1 lists the types of research questions that are asked by engineering research papers and provides specific question templates.
Table 1. Types of software engineering research questions
Type of question / Examples

Method or means of development
  How can we do/create/modify/evolve (or automate doing) X?
  What is a better way to do/create/modify/evolve X?

Method for analysis or evaluation
  How can I evaluate the quality/correctness of X?
  How do I choose between X and Y?

Design, evaluation, or analysis of a particular instance
  How good is Y? What is property X of artifact/method Y?
  What is a (better) design, implementation, maintenance, or adaptation for application X?
  How does X compare to Y?
  What is the current state of X / practice of Y?

Generalization or characterization
  Given X, what will Y (necessarily) be?
  What, exactly, do we mean by X? What are its important characteristics?
  What is a good formal/empirical model for X?
  What are the varieties of X, how are they related?

Feasibility study or exploration
  Does X even exist, and if so what is it like?
  Is it possible to accomplish X at all?
The first two types of research produce methods of development or of analysis that the authors investigated in one setting, but that can presumably be applied in other settings. The third type of research deals explicitly with some particular system, practice, design or other instance of a system or method; these may range from narratives about industrial practice to analytic comparisons of alternative designs. For this type of research the instance itself should have some broad appeal. Finally, papers that deal with an issue in a completely new way are sometimes treated differently from papers that improve on prior art, so "feasibility" is a separate category.
2.1 Which of these are most common?
The most common kind of ICTRD paper reports an improved method or means of developing communication techniques, that is, of designing, implementing, evolving, maintaining, or otherwise operating on the system itself. Papers addressing these questions dominate both the submitted and the accepted papers. Also fairly common are papers about methods for reasoning about communication systems, principally analysis of correctness (testing and verification). Feasibility study papers have a modest acceptance edge in this very selective conference.
Table 2 gives the distribution of submissions to ICTRD 2009, based on reading the abstracts (not the full papers, but remember that the abstract tells a reader what to expect from the paper). For each type of research question, the table gives the number of papers submitted and accepted, the percentage of the total paper set of each kind, and the acceptance ratio within each type of question. Figures 1 and 2 show these counts and distributions.
Table 2. Types of research questions represented in ICTRD 2009 submissions and acceptances
Type of question
Method or means of development
Method for analysis or evaluation
Design, evaluation or analysis of a particular instance
Generalization or characterization
Feasibility study or exploration
Fig 1: Counts of acceptances and rejections by type of research question
Fig 2: Distribution of acceptances and rejections by type of research question
2.2 What do program committees look for?
The program committee always looks for a clear statement of the specific problem you solved and an explanation of how the answer will help solve an important engineering problem; do not devote most of the paper to describing your result alone. You should begin by explaining what question you're answering and why the answer matters.
3. What is your new result and what kinds of validation do engineers do?
In any research paper, explain precisely what you have contributed to the store of engineering knowledge and how this is useful beyond your own project. Engineering research contributions may be procedures or techniques for development or analysis; they may be models that generalize from specific examples. Engineers offer several kinds of evidence in support of their research results; it is essential to select a form of validation that is appropriate for the type of research result and the method used to obtain it.
3.1. Which of these are most common and what do program committees look for?
In the ICTRD 2009 abstracts, the most successful kinds of validation were based on analysis and real-world experience. Well-chosen examples were also successful. Persuasion was not persuasive, and narrative evaluation was only slightly more successful.
The program committee looks for solid evidence to support your result. It's not enough that your idea works for you; there must also be evidence that the idea or the technique will help someone else as well.
Here are some examples, with advice for staying out of trouble:
If you claim to improve on prior art, compare your result objectively to the prior art.
If you used an analysis technique, follow the rules of that technique. If the technique is not a common one in software / communication engineering (e.g., meta-analysis, decision theory, user studies or other behavioral analyses), explain the technique and its standards of proof, and be clear about your adherence to the technique.
If you offer practical experience as evidence for your result, establish the effect your research has. If at all possible, compare similar situations with and without your result.
If you performed a controlled experiment, explain the experimental design. What is the hypothesis? What is the treatment? What is being controlled? What data did you collect, and how did you analyze it? Are the results significant? What are the potentially confounding factors, and how are they handled? Do the conclusions follow rigorously from the experimental data?
If you performed an empirical study, explain what you measured, how you analyzed it, and what you concluded. What data did you collect, and how? How is the analysis related to the goal of supporting your claim about the result? Do not confuse correlation with causality.
If you use a small example for explaining the result, provide additional evidence of its practical use and scalability.
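To make the controlled-experiment advice above concrete, here is a minimal sketch of how one might check whether results are significant when comparing measurements taken with and without a proposed technique. The defect counts and the use of Welch's t statistic are illustrative assumptions, not taken from any ICTRD paper; a real study would also report degrees of freedom, a p-value, and the confounding factors considered.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    m_a, m_b = mean(sample_a), mean(sample_b)
    v_a, v_b = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    standard_error = math.sqrt(v_a / len(sample_a) + v_b / len(sample_b))
    return (m_a - m_b) / standard_error

# Hypothetical data: defect counts in six projects without the technique
# (control) and six projects with it (treatment).
control = [12, 15, 11, 14, 13, 16]
treatment = [9, 10, 8, 11, 9, 10]

t = welch_t(treatment, control)
# A large |t| suggests the difference is unlikely to be chance alone;
# report the associated p-value and experimental design in the paper.
print(f"mean(control)={mean(control):.1f}, mean(treatment)={mean(treatment):.1f}, t={t:.2f}")
```

The same discipline applies to the empirical-study bullet: state what was measured, how it was analyzed, and why the analysis supports the claim.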
The program committee wants to know what is novel or exciting, and why. What, specifically, is the contribution? What is the increment over earlier work by the same authors? By other authors? Is this a sufficient increment, given the usual standards of the sub-discipline? Above all, the program committee wants to know what you actually contributed to our store of knowledge about engineering. Sure, you wrote this tool and tried it out. But was your contribution the technique that is embedded in the tool, or was it building a tool that implements the technique more effectively than other tools?
4. Does the abstract matter?
The abstracts of papers submitted to ICTRD 2009 convey a sense of the kinds of research submitted to the conference. Some abstracts were easier to read and (apparently) more informative than others. Many of the clearest abstracts had a common structure:
Two or three sentences about the current state of the art, identifying a particular problem
One or two sentences about what this paper contributes to improving the situation
One or two sentences about the specific result of the paper and the main idea behind it
A sentence about how the result is demonstrated or defended
Abstracts in roughly this format often explained clearly what readers could expect in the paper.
Acceptance rates were highest for papers whose abstracts indicated that analysis or experience provides evidence in support of the work. Decisions on papers were made on the basis of the whole papers, of course, not just the abstracts; but it is reasonable to assume that the abstracts reflect what's in the papers.
Whether you like it or not, people judge papers by their abstracts and read the abstract in order to decide whether to read the whole paper. It's important for the abstract to tell the story. Don't assume, though, that simply adding a sentence about analysis or experience to your abstract is sufficient; the paper must deliver what the abstract promises.
This work depended critically on access to the entire body of submitted papers for the ICTRD 2009 conference, which would not have been possible without the cooperation and encouragement of the ICTRD 2009 program committee. The development of these ideas has also benefited from discussions with ICTRD 2009 program committee members, with colleagues, and in open discussion sessions during the conference.