How does duplication arise in code?

Posted by epicdev Archive : 2011. 10. 1. 12:17
Imposed duplication: developers feel they have no choice; the environment seems to require duplication.

Inadvertent duplication: developers don't realize that they are duplicating information.

Impatient duplication: developers get lazy and duplicate because it seems like the easy way out.

Interdeveloper duplication: several people on a team (or on different teams) duplicate the same piece of information.


From The Pragmatic Programmer by Andrew Hunt (Insight, 2007)


Know when to stop

Posted by epicdev Archive : 2011. 10. 1. 10:47
In some ways, programming is like painting. You start with a blank canvas and a few basic raw materials. You use a combination of science, art, and craft to decide what to do with them. You sketch out the overall picture, paint the surrounding environment, and then fill in the details. You constantly step back to look at what you have done with a critical eye. Every now and then you throw the canvas away and start over completely.

But artists will tell you that all that hard work is ruined if you don't know when to stop. If you paint layer over layer and detail over detail, the painting gets lost in the paint.

Don't spoil a perfectly good program by over-embellishing or over-refining it. Move on, and let the code stand as it is for a while. It may not be perfect. Don't worry: it could never be perfect.

 
From The Pragmatic Programmer by Andrew Hunt (Insight, 2007)

When "resource not found" errors keep appearing

Posted by epicdev Archive : 2011. 9. 29. 09:57
When working on Android, if you modify XML or other resources and then run the app right away, you will often get an error saying a resource does not exist, or find that your XML changes have not been applied. When that happens, clean the project in Eclipse and run it again; that sorts it out.
  

How to version software

Posted by epicdev Archive : 2011. 9. 25. 09:28
Source: http://en.wikipedia.org/wiki/Software_versioning

Software versioning

Software versioning is the process of assigning either unique version names or unique version numbers to unique states of computer software. Within a given version number category (major, minor), these numbers are generally assigned in increasing order and correspond to new developments in the software. At a fine-grained level, revision control is often used for keeping track of incrementally different versions of electronic information, whether or not this information is actually computer software.


Schemes

A variety of version numbering schemes have been created to keep track of different versions of a piece of software. The ubiquity of computers has also led to these schemes being used in contexts outside computing.

Sequence-based identifiers

[Figure: version number sequence]

In sequence-based software versioning schemes, each software release is assigned a unique identifier that consists of one or more sequences of numbers or letters. That is the extent of the commonality, however; schemes vary widely in areas such as the number of sequences, the meaning attributed to individual sequences, and the means of incrementing the sequences.

Change significance

In some schemes, sequence-based identifiers are used to convey the significance of changes between releases: changes are classified by significance level, and the sequence to change between releases is chosen based on the significance of the changes from the previous release. The first sequence is changed for the most significant changes, and sequences after the first represent changes of decreasing significance.

For instance, in a scheme that uses a four-sequence identifier, the first sequence may be incremented only when the code is completely rewritten, while a change to the user interface or the documentation may only warrant a change to the fourth sequence.

This practice permits users (or potential adopters) to evaluate how much real-world testing a given software release has undergone. If changes are made between, say, 1.3rc4 and the production release of 1.3, then that release, which claims a production-grade level of real-world testing, in fact contains changes which have not necessarily been tested in the real world at all. This approach commonly permits a third level of numbering ("change"), but does not apply the same level of rigor to changes in that number: 1.3.1, 1.3.2, 1.3.3, 1.3.4... 1.4.1, etc.

In principle, in subsequent releases, the major number is increased when there are significant jumps in functionality, the minor number is incremented when only minor features or significant fixes have been added, and the revision number is incremented when minor bugs are fixed. A typical product might use the numbers 0.9 (for beta software), 0.9.1, 0.9.2, 0.9.3, 1.0, 1.0.1, 1.0.2, 1.1, 1.1.1, 2.0, 2.0.1, 2.0.2, 2.1, 2.1.1, 2.1.2, 2.2, etc. Developers have at times jumped (for example) from version 5.0 to 5.5 to indicate that significant features have been added, but not enough of them to warrant incrementing the major version number. This is improper; it is usually done to create a visual differential between software versions. A person may also be less inclined to go through the trouble of installing, reinstalling, and/or removing old versions of software if only a minor change is indicated (e.g., version 5.0 to 5.01, or 5.0 to 5.1).
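
To make the increment rules concrete, here is a minimal sketch of a major.minor.revision identifier (the class name and its bump methods are illustrative, not from any standard library); the key property is that bumping a sequence resets every less significant sequence to zero:

// Minimal sketch of a major.minor.revision identifier with
// significance-based incrementing.
public class Version {
    private int major, minor, revision;

    public Version(int major, int minor, int revision) {
        this.major = major;
        this.minor = minor;
        this.revision = revision;
    }

    public void bumpMajor()    { major++; minor = 0; revision = 0; } // significant jump in functionality
    public void bumpMinor()    { minor++; revision = 0; }            // minor feature or significant fix
    public void bumpRevision() { revision++; }                       // minor bug fix

    @Override
    public String toString() { return major + "." + minor + "." + revision; }

    public static void main(String[] args) {
        Version v = new Version(1, 0, 2);
        v.bumpMinor();    // 1.1.0
        v.bumpRevision(); // 1.1.1
        v.bumpMajor();    // 2.0.0
        System.out.println(v); // prints 2.0.0
    }
}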

A different approach is to use the major and minor numbers along with an alphanumeric string denoting the release type, e.g. "alpha", "beta" or "release candidate". A release train using this approach might look like 0.5, 0.6, 0.7, 0.8, 0.9 == 1.0b1, 1.0b2 (with some fixes), 1.0b3 (with more fixes) == 1.0rc1 (if it is stable enough) == 1.0. If 1.0rc1 turns out to have bugs which must be fixed, it becomes 1.0rc2, and so on. The important characteristic of this approach is that the first version of a given level (beta, RC, production) must be identical to the last version of the level below it: you cannot make any changes at all from the last beta to the first RC, or from the last RC to production. If you do, you must roll out another release at that lower level.

However, since version numbers are human-generated, not computer-generated, there is nothing that prevents arbitrary changes that violate such guidelines: for example, the first sequence could be incremented between versions that differ by not even a single line of code, to give the (false) impression that very significant changes were made.

Other schemes impart meaning on individual sequences:

major.minor[.build[.revision]]

or

major.minor[.maintenance[.build]]

Again, in these examples, the definition of what constitutes a "major" as opposed to a "minor" change is entirely arbitrary and up to the author, as is what defines a "build", or how a "revision" differs from a "minor" change.

A similar problem of relative change significance and versioning nomenclature exists in book publishing, where edition numbers or names can be chosen based on varying criteria.

In most proprietary software, the first released version of a software product has version 1.

Designating development stage

Some schemes use a zero in the first sequence to designate alpha or beta status for releases that are not stable enough for general or practical deployment and are intended for testing or internal use only.

It can be used in the third position:

  • 0 for alpha (status)
  • 1 for beta (status)
  • 2 for release candidate
  • 3 for (final) release

For instance:

  • 1.2.0.1 instead of 1.2-a1
  • 1.2.1.2 instead of 1.2-b2 (beta with some bug fixes)
  • 1.2.2.3 instead of 1.2-rc3 (release candidate)
  • 1.2.3.0 instead of 1.2-r (commercial distribution)
  • 1.2.3.5 instead of 1.2-r5 (commercial distribution with many bug fixes)
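
As a sketch of how such an encoding might be produced, the snippet below maps stages to third-position codes; the enum and the format method are hypothetical, mine rather than part of any standard scheme:

// Hypothetical sketch: the stage code occupies the third sequence and
// the pre-release number the fourth, so 1.2-rc3 becomes 1.2.2.3.
public class StagedVersion {
    enum Stage {
        ALPHA(0), BETA(1), RC(2), RELEASE(3);
        final int code;
        Stage(int code) { this.code = code; }
    }

    static String format(int major, int minor, Stage stage, int preRelease) {
        return major + "." + minor + "." + stage.code + "." + preRelease;
    }

    public static void main(String[] args) {
        System.out.println(format(1, 2, Stage.ALPHA, 1));   // 1.2.0.1  (1.2-a1)
        System.out.println(format(1, 2, Stage.RC, 3));      // 1.2.2.3  (1.2-rc3)
        System.out.println(format(1, 2, Stage.RELEASE, 0)); // 1.2.3.0  (1.2-r)
    }
}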

Separating sequences

When printed, the sequences may be separated with characters. The choice of characters and their usage varies by scheme. The following list shows hypothetical examples of separation schemes for the same release (the thirteenth third-level revision to the fourth second-level revision to the second first-level revision):

  • A scheme may use the same character between all sequences: 2.4.13, 2/4/13, 2-4-13
  • A scheme's choice of which sequences to separate may be inconsistent, separating some sequences but not others: 2.413
  • A scheme's choice of characters may be inconsistent within the same identifier: 2.4_13

When a period is used to separate sequences, it does not represent a decimal point, and the sequences do not have positional significance. An identifier of 2.5, for instance, is not "two and a half" or "half way to version three", it is the fifth second-level revision of the second first-level revision.

Number of sequences

There is sometimes a fourth, unpublished number which denotes the software build (as used by Microsoft). Adobe Flash is a notable case where a 4-part version number is indicated publicly, as in 10.1.53.64. Some companies also include the build date. Version numbers may also include letters and other characters, such as Lotus 1-2-3 Release 1a.

Incrementing sequences

There are two schools of thought regarding how numeric version numbers are incremented. Most free software packages treat the numbers as a continuous stream, so a free software or open source product may have version numbers 1.7.0, 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.11.0, 1.11.1, 1.11.2, etc. An example of such a software package is MediaWiki. However, many programs treat version numbers in another way, generally as decimal numbers, and may have version numbers such as 1.7, 1.8, 1.81, 1.82, 1.9, etc. In software packages using this style of numbering, 1.81 is the next minor version after 1.8. Maintenance releases (i.e. bug fixes only) would generally be denoted as 1.81a, 1.81b, etc.
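
The distinction matters as soon as versions are compared mechanically. The sketch below (my own illustration, not from any particular package) compares dotted identifiers sequence by sequence as integers, the "continuous stream" interpretation, under which 1.10.0 correctly sorts after 1.9.0 even though a plain string comparison orders it first:

// Compares dotted version identifiers sequence by sequence as integers.
public class SequenceCompare {
    static int compare(String a, String b) {
        String[] as = a.split("\\."), bs = b.split("\\.");
        for (int i = 0; i < Math.max(as.length, bs.length); i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0; // missing sequences count as 0
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) return Integer.compare(ai, bi);
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compare("1.10.0", "1.9.0") > 0);  // true: 1.10.0 is the newer release
        System.out.println("1.10.0".compareTo("1.9.0") < 0); // true: string order gets it wrong
    }
}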

The standard GNU version numbering scheme is major.minor.revision, but emacs is a notable example using another scheme where the major number ("1") was dropped and a "user site" revision was added, which is always zero in original emacs packages but increased by distributors.[1] Similarly, Debian package numbers are prefixed with an optional "epoch", which is used to allow the versioning scheme to be changed.[2]

Using negative numbers

There exist some projects that use negative version numbers. One example is the SmallEiffel compiler, which started from -1.0 and counted upwards to 0.0.[1]

Degree of compatibility

Some projects use the major version number to indicate incompatible releases. Two examples are Apache APR[3] and the FarCry CMS.[4]

Date

The Wine project used a date versioning scheme, which uses the year followed by the month followed by the day of the release; for example, "Wine 20040505". Wine is now on a "standard" release track; the most current stable version (as of 2010) is 1.2. Ubuntu Linux uses a similar versioning scheme—Ubuntu 10.10, for example, was released October 2010.

When using dates in versioning, for instance in file names, it is common to use the ISO scheme[5]: YYYY-MM-DD, as this is easily string-sorted into increasing or decreasing order. The hyphens are sometimes omitted.
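
A quick illustration of why the ISO scheme is convenient: lexicographically sorting ISO-formatted date strings yields chronological order with no date parsing at all.

import java.util.Arrays;

// ISO-formatted dates (YYYY-MM-DD) sort chronologically under a
// plain string sort, which is why the scheme suits file names and
// date-based version identifiers.
public class IsoDateSort {
    public static void main(String[] args) {
        String[] releases = { "2004-05-05", "2003-12-01", "2004-01-20" };
        Arrays.sort(releases);
        System.out.println(Arrays.toString(releases));
        // [2003-12-01, 2004-01-20, 2004-05-05]
    }
}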

Microsoft Office build numbers are actually an encoded date.[6]

Year of release

Other examples that identify versions by year include Adobe Illustrator 88 and WordPerfect Office 2003. When a date is used to denote version, it is generally for marketing purposes, and an actual version number also exists. For example, Microsoft Windows 2000 Server is internally versioned as Windows NT 5.0 ("NT" being a reference to the original product name).

Alphanumeric codes

Examples:

TeX

TeX has an idiosyncratic version numbering system. Since version 3, updates have been indicated by adding an extra digit at the end, so that the version number asymptotically approaches π; this is a form of unary numbering – the version number is the number of digits. The current version is 3.1415926. This is a reflection of the fact that TeX is now very stable, and only minor updates are anticipated. TeX developer Donald Knuth has stated that the "absolutely final change (to be made after my death)" will be to change the version number to π, at which point all remaining bugs will become permanent features.[7]

In a similar way, the version number of METAFONT asymptotically approaches e.

Apple

Apple has a formalised version number structure based around the NumVersion struct, which specifies a one- or two-digit major version, a one-digit minor version, a one-digit "bug" (i.e. revision) version, a stage indicator (drawn from the set development/prealpha, alpha, beta and final/release), and a one-byte (i.e. having values in the range 0–255) pre-release version, which is only used at stages prior to final. In writing these version numbers as strings, the convention is to omit any parts after the minor version whose values are zero (with "final" being considered the zero stage), thus writing 1.0.2b12, 1.0.2 (rather than 1.0.2f0), and 1.1 (rather than 1.1.0f0).

Other schemes

Some software producers use different schemes to denote releases of their software. For example, the Microsoft Windows operating system was first labelled with standard numerical version numbers (Windows 1.0 through Windows 3.11). Later, Microsoft started using separate version names for marketing purposes, first using years (Windows 95 (4.0), Windows 98 (4.10), Windows 2000 (5.0)), then using alphanumeric codes (Windows Me (4.90), Windows XP (5.1)), then using brand names (Windows Vista (6.0)). With the release of Windows 7 it appears that Microsoft has returned to using numerical version numbers, although the official version number for Windows 7 is 6.1.[8]

The Debian project uses a major/minor versioning scheme for releases of its operating system, but uses code names from the movie Toy Story during development to refer to stable, unstable and testing releases.

Internal version numbers

Software may have an "internal" version number which differs from the version number shown in the product name (and which typically follows version numbering rules more consistently). Java SE 5.0, for example, has the internal version number of 1.5.0, and versions of Windows from NT 4 on have continued the standard numerical versions internally: Windows 2000 is NT 5.0, XP is Windows NT 5.1, Windows Server 2003 is NT 5.2, Vista is NT 6.0 and 7 is NT 6.1. Note, however, that Windows NT is only on its third major revision, as its first release was numbered 3.1 (to match the then-current Windows release number).

Pre-release versions

In conjunction with the various versioning schemes listed above, a system for denoting pre-release versions is generally used, as the program makes its way through the stages of the software release life cycle. Programs that are in an early stage are often called "alpha" software, after the first letter in the Greek alphabet. After they mature but are not yet ready for release, they may be called "beta" software, after the second letter in the Greek alphabet. Generally alpha software is tested by developers only, while beta software is distributed for community testing. Alpha- and beta-version software is often given numerical versions less than 1 (such as 0.9), to suggest their approach toward a final "1.0" release. However, if the pre-release version is for an existing software package (e.g. version 2.5), then an "a" or "alpha" may be appended to the version number. So the alpha version of the 2.5 release might be identified as 2.5a or 2.5.a. Software packages which are soon to be released as a particular version may carry that version tag followed by "rc-#", indicating the number of the release candidate. When the version is actually released, the "rc" tag disappears.

This can apparently cause trouble for some package managers, though. The Rivendell radio broadcast automation package, for example, had to make its first full production release as v1.0.1, because if it were called v1.0.0, RPM would refuse to install it: the algorithm sorts "1.0.0" lower than "1.0.0rc2", since version comparison algorithms are generally language-agnostic and thus don't know the meaning of "rc".
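
The pitfall is easy to reproduce. Under a plain lexicographic comparison, as under a language-agnostic packaging algorithm, "1.0.0" is a prefix of "1.0.0rc2" and therefore sorts lower, making the production release look older than its own release candidate:

// Lexicographically, the shorter prefix sorts first, so the final
// release appears "older" than its release candidate.
public class RcPitfall {
    public static void main(String[] args) {
        System.out.println("1.0.0".compareTo("1.0.0rc2") < 0); // true
    }
}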

Modifications to the numeric system

Odd-numbered versions for development releases

Between the 1.0 and the 2.6.x series, the Linux kernel used odd minor version numbers to denote development releases and even minor version numbers to denote stable releases; see Linux kernel: Version numbering. For example, Linux 2.3 was a development family of the second major design of the Linux kernel, and Linux 2.4 was the stable release family that Linux 2.3 matured into. After the minor version number in the Linux kernel is the release number, in ascending order; for example, Linux 2.4.0 → Linux 2.4.22. Since the 2004 release of the 2.6 kernel, Linux no longer uses this system, and has a much shorter release cycle, instead now simply incrementing the third number, using a fourth number as necessary.

The same odd-even system is used by some other software with long release cycles, such as GNOME.

Apple

Apple had their own twist on this habit during the era of the classic MacOS: although there were minor releases, they rarely went beyond 1, and when they did, they twice jumped straight to 5, suggesting a change of magnitude intermediate between a major and minor release (thus, 8.5 really means 'eight and a half', and 8.6 is 'eight and a half point one'). The complete sequence of versions (neglecting revision releases) is 1.0, 1.1, 2.0, 2.1, 3.0, 3.2 (skipping 3.1), 4.0, 4.1, 5.0, 5.1, 6.0, 7.0, 7.1, 7.5, 7.6, 8.0, 8.1, 8.5, 8.6, 9.0, 9.1, 9.2.

Mac OS X has departed from this trend, having gone more conventionally from 10.0 to 10.7, one minor release at a time. However, note that the 10.4.10 update does not follow the previously-indicated approach of having a "one- or two-digit major version, a one-digit minor version, a one-digit 'bug' (i.e. revision) version…". The bug-fix value is not a decimal indicator, but is an incremental whole value; while it is not expected, there would be nothing preventing a distant-future "X.4.321" release.

Political and cultural significance of version numbers

Version 1.0 as a milestone

Proprietary software developers often start at version 1 for the first release of a program and increment the major version number with each rewrite. This can mean that a program can reach version 3 within a few months of development, before it is considered stable or reliable.

In contrast to this, the free-software community tends to use version 1.0 as a major milestone, indicating that the software is "complete", that it has all major features, and is considered reliable enough for general release.

In this scheme, the version number slowly approaches 1.0 as more and more bugs are fixed in preparation for the 1.0 release. The developers of MAME do not intend to release a version 1.0 of their emulator program.[citation needed] The argument is that it will never be truly "finished" because there will always be more arcade games. Version 0.99 was simply followed by version 0.100 (minor version 100 > 99). In a similar fashion Xfire 1.99 was followed by 1.100. After over 8 years of development, eMule just recently reached version 0.50a.

To describe program history

Winamp released an entirely different architecture for version 3 of the program. Due to lack of backwards compatibility with plugins and other resources from the major version 2, a new version was issued that was compatible with both version 2 and 3. The new version was set to 5 (2+3), skipping version 4. The developers also humorously joked that they skipped version 4 because "nobody wants to see a Winamp 4 skin", referencing the foreskin of a penis.[9]

A similar thing happened with UnixWare 7, which was the combination of UnixWare 2 and OpenServer 5.

Keeping up with competitors

There is a common habit in the proprietary software industry to make major jumps in numeric major or minor version numbers for reasons which do not seem (to many members of the program's audience) to merit the "marketing" version numbers.

This can be seen in several Microsoft and America Online products, as well as Sun Solaris and Java Virtual Machine numbering, SCO Unix version numbers, and Corel WordPerfect, as well as the filePro DB/RAD programming package, which went from 2.0 to 3.0 to 4.0 to 4.1 to 4.5 to 4.8 to 5.0, and is about to go to 5.6, with no intervening releases. A slightly different pattern can be seen in AOL's PC client software, which tends to have only major releases (5.0, 6.0, 7.0, etc.). Likewise, Microsoft Access jumped from version 2.0 to version 7.0 to match the version number of Microsoft Word.

Microsoft has also been the target of 'catch-up' versioning, with the Netscape browser skipping from version 5 to 6 in line with Microsoft's Internet Explorer, but also because the Mozilla application suite inherited version 5 in its user agent string during pre-1.0 development and Netscape 6.x was built upon Mozilla's code base.

Sun's Java has at times had a hybrid system, where the actual version number has always been 1.x but three times has been marketed by reference only to the x:

  • JDK 1.0.3
  • JDK 1.1.2 through 1.1.8
  • J2SE 1.2.0 ("Java 2") through 1.4.2
  • Java 1.5.0 ("Java 5")
  • Java 1.6.0 ("Java 6")

Sun also dropped the first digit for Solaris, where Solaris 2.8 (or 2.9) is referred to as Solaris 8 (or 9) in marketing materials.

Another example of keeping up with competitors is when Slackware Linux jumped from version 4 to version 7 in 1999.[10]

Superstition

  • The Office 2007 release of Microsoft Office has an internal version number of 12. The next version Office 2010 has an internal version of 14, due to superstitions surrounding the number 13.[11]
  • Corel's WordPerfect Office version 13 is marketed as "X3" (Roman numeral 10 plus "3"). The procedure has continued into the next version, X4. The same has happened with Corel's Graphic Suite (i.e. CorelDRAW, Corel Photo-Paint) as well as its video editing software "Video Studio".
  • Nokia decided to jump directly from S60 3rd Edition to S60 5th Edition, skipping the fourth edition due to the tetraphobia of their Asian customers.
  • ABBYY Lingvo Dictionary uses numbering 12, x3 (14), x5 (15).


Overcoming perceived marketing difficulties

In the mid-1990s, the rapidly growing CMMS Maximo moved from Maximo Series 3 directly to Series 5, skipping Series 4 due to that number's perceived marketing difficulties in the Chinese market, where the pronunciation of the number 4 in Chinese rhymes with "death" or "failure". This did not, however, stop Maximo Series 5 version 4.0 being released. (The "Series" versioning has since been dropped, effectively resetting version numbers after Series 5 version 1.0's release.)

Significance in software engineering

Version numbers are used in practical terms by the consumer, or client, by being able to compare their copy of the software product against another copy, such as the newest version released by the developer. For the programmer team or company, versioning is often used on a file-by-file basis, where individual parts or sectors of the software code are compared and contrasted with newer or older revisions, often in a collaborative version control system. There is no absolute and definite software version schema; it can often vary from software genre to genre, and is very commonly based on the programmer's personal preference.

Significance in technical support

Version numbers allow people providing support to ascertain exactly what code a user is running, so that they know what bugs might affect a problem, and the like. This occurs when a program has a substantial user community, especially when that community is large enough that the people providing technical support are not the people who wrote the code.

Version numbers for files and documents

Some computer file systems, such as the OpenVMS Filesystem, also keep versions for files.

Versioning of documents is broadly similar to versioning in software engineering: with each small change in the structure, contents, or conditions, the version number is incremented by 1, or by a smaller or larger value, again depending on the personal preference of the author and the size or importance of the changes made.

Version number ordering systems

Version numbers very quickly evolve from simple integers (1, 2, ...) to rational numbers (2.08, 2.09, 2.10) and then to non-numeric "numbers" such as 4:3.4.3-2. These complex version numbers are therefore better treated as character strings. Operating systems that include package management facilities (such as all non-trivial Linux or BSD distributions) will use a distribution-specific algorithm for comparing version numbers of different software packages. For example, the ordering algorithms of Red Hat and derived distributions differ from those of the Debian-like distributions.

As an example of surprising version number ordering implementation behavior, in Debian, leading zeroes are ignored in chunks, so that 5.0005 and 5.5 are considered equal, and 5.5 < 5.0006. This can confuse users; string-matching tools may fail to find a given version number; and it can cause subtle bugs in package management if the programmers use string-indexed data structures such as version-number-indexed hash tables.

In order to ease sorting, some software packages represent each component of the major.minor.release scheme with a fixed width. Perl represents its version numbers as a floating-point number; for example, Perl's 5.8.7 release can also be represented as 5.008007. This allows a theoretical version of 5.8.10 to be represented as 5.008010. Other software packages pack each segment into a fixed bit width; for example, 5.8.7 could be represented in 24 bits: 5 << 16 | 8 << 8 | 7 (hexadecimal 050807; for version 12.34.56, hexadecimal 0C2238). The floating-point scheme breaks down if any segment of the version number exceeds 999; a packed-binary scheme employing 8 bits per segment breaks down past 255.
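
A small sketch reproducing both representations mentioned above (the packing uses 8 bits per segment, matching the hexadecimal examples in the text):

// Packing major.minor.release into 24 bits, 8 bits per segment.
public class PackedVersion {
    static int pack(int major, int minor, int release) {
        return (major << 16) | (minor << 8) | release;
    }

    public static void main(String[] args) {
        System.out.printf("%06X%n", pack(5, 8, 7));    // 050807
        System.out.printf("%06X%n", pack(12, 34, 56)); // 0C2238
        // The floating-point alternative: 5.8.7 -> 5.008007
        System.out.println(5 + 8 / 1000.0 + 7 / 1000000.0); // 5.008007
    }
}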

Use in other media

Software-style version numbers can be found in other media. In some cases the use is a direct analogy (for example, Dungeons & Dragons 3.5, where the rules were revised from the third edition, but not so much as to be considered the fourth), but more often it's used to play on an association with high technology and doesn't literally indicate a 'version' (e.g., Tron 2.0, a video game followup to the film Tron, or the television show The IT Crowd, which refers to the second season as Version 2.0). A particularly notable usage is Web 2.0, referring to the World Wide Web as used in collaborative projects such as wikis and social networking websites.


References

  1. ^ a b "Advogato: Version numbering madness". 2000-02-28. Retrieved 2009-04-11.
  2. ^ Debian Policy Manual, 5.6.12 Version.
  3. ^ "Versioning Numbering Concepts - The Apache Portable Runtime Project". Retrieved 2009-04-11.
  4. ^ "Daemonite: The science of version numbering". 2004-09-14. Retrieved 2009-04-11.
  5. ^ Markus Kuhn (2004-12-19). "International standard date and time notation". University of Cambridge. Retrieved 2009-04-11.
  6. ^ Jeff Atwood (2007-02-15). "Coding Horror: What's In a Version Number, Anyway?". Retrieved 2009-04-11.
  7. ^ Donald E. Knuth. "The future of TeX and METAFONT". NTG journal MAPS (1990), 489. Reprinted as chapter 30 of Digital Typography, p. 571.
  8. ^ Enter the "ver" command in a Windows 7 command prompt.
  9. ^ "Winamp Media Player FAQ".
  10. ^ "Slackware FAQ".
  11. ^ Paul Thurrott (2009-05-14). "Office 2010 FAQ". Retrieved 2009-12-30.


  

Writing and reading an Object as a byte array

Posted by epicdev Archive : 2011. 9. 24. 00:01
Source: http://scr4tchp4d.blogspot.com/2008/07/object-to-byte-array-and-byte-array-to.html
 
Only primitive types and objects that implement the java.io.Serializable interface can be written and read this way.
// Required imports (at the top of the enclosing class's file):
// import java.io.ByteArrayOutputStream;
// import java.io.IOException;
// import java.io.ObjectOutputStream;

// Serializes an object into a byte array; returns null on failure.
public byte[] toByteArray (Object obj)
{
  byte[] bytes = null;
  ByteArrayOutputStream bos = new ByteArrayOutputStream();
  try {
    ObjectOutputStream oos = new ObjectOutputStream(bos);
    oos.writeObject(obj); // obj (and everything it references) must be Serializable
    oos.flush();
    oos.close();
    bos.close();
    bytes = bos.toByteArray();
  }
  catch (IOException ex) {
    //TODO: Handle the exception
  }
  return bytes;
}
    
// Deserializes a byte array back into an object; returns null on failure.
// Also needs java.io.ByteArrayInputStream and java.io.ObjectInputStream.
public Object toObject (byte[] bytes)
{
  Object obj = null;
  try {
    ByteArrayInputStream bis = new ByteArrayInputStream (bytes);
    ObjectInputStream ois = new ObjectInputStream (bis);
    obj = ois.readObject();
    ois.close(); // also closes the underlying byte-array stream
  }
  catch (IOException ex) {
    //TODO: Handle the exception
  }
  catch (ClassNotFoundException ex) {
    //TODO: Handle the exception (the serialized class is not on the classpath)
  }
  return obj;
}
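
A minimal usage sketch: assuming the two helpers above live in a class named SerializationUtil (the class name is my placeholder), any Serializable object survives the round trip. HashMap is just an arbitrary example type.

public class SerializationDemo {
    public static void main(String[] args) {
        SerializationUtil util = new SerializationUtil(); // hypothetical holder of the helpers above
        java.util.HashMap<String, Integer> original = new java.util.HashMap<String, Integer>();
        original.put("answer", 42);

        byte[] bytes = util.toByteArray(original); // object -> byte[]
        Object restored = util.toObject(bytes);    // byte[] -> object

        System.out.println(((java.util.HashMap<?, ?>) restored).get("answer")); // prints 42
    }
}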
  

How to attach an action listener to an EditText

Posted by epicdev Archive : 2011. 9. 23. 03:07
// D, TAG and sendMessage(...) are assumed to be members of the
// enclosing Activity; attach the listener to the EditText like this:
mOutEditText.setOnEditorActionListener(mWriteListener);

private TextView.OnEditorActionListener mWriteListener =
    new TextView.OnEditorActionListener()
{
    public boolean onEditorAction(TextView view, int actionId, KeyEvent event)
    {
        // If the action is a key-up event on the return key, send the message
        if (actionId == EditorInfo.IME_NULL && event.getAction() == KeyEvent.ACTION_UP)
        {
            String message = view.getText().toString();
            sendMessage(message);
        }
        if (D) Log.i(TAG, "END onEditorAction");
        return true; // the action has been consumed
    }
};

Source: http://developer.android.com/reference/android/widget/TextView.OnEditorActionListener.html

public static interface TextView.OnEditorActionListener
(android.widget.TextView.OnEditorActionListener)

Class Overview

Interface definition for a callback to be invoked when an action is performed on the editor.

Summary

Public Methods
abstract boolean onEditorAction(TextView v, int actionId, KeyEvent event)
Called when an action is being performed.

Public Methods

public abstract boolean onEditorAction (TextView v, int actionId, KeyEvent event)

Since: API Level 3

Called when an action is being performed.

Parameters
  • v: The view that was clicked.
  • actionId: Identifier of the action. This will be either the identifier you supplied, or EditorInfo.IME_NULL if being called due to the enter key being pressed.
  • event: If triggered by an enter key, this is the event; otherwise, this is null.
Returns
  • Return true if you have consumed the action, else false.
 
  

The Android activity lifecycle

Posted by epicdev Archive : 2011. 9. 23. 02:28
Source: http://developer.android.com/reference/android/app/Activity.html

 Activity Lifecycle

Activities in the system are managed as an activity stack. When a new activity is started, it is placed on the top of the stack and becomes the running activity -- the previous activity always remains below it in the stack, and will not come to the foreground again until the new activity exits.

An activity has essentially four states:

  • If an activity is in the foreground of the screen (at the top of the stack), it is active or running.
  • If an activity has lost focus but is still visible (that is, a new non-full-sized or transparent activity has focus on top of your activity), it is paused. A paused activity is completely alive (it maintains all state and member information and remains attached to the window manager), but can be killed by the system in extreme low memory situations.
  • If an activity is completely obscured by another activity, it is stopped. It still retains all state and member information, however, it is no longer visible to the user so its window is hidden and it will often be killed by the system when memory is needed elsewhere.
  • If an activity is paused or stopped, the system can drop the activity from memory by either asking it to finish, or simply killing its process. When it is displayed again to the user, it must be completely restarted and restored to its previous state.

The following diagram shows the important state paths of an Activity. The square rectangles represent callback methods you can implement to perform operations when the Activity moves between states. The colored ovals are major states the Activity can be in.

[Diagram: Activity lifecycle state paths, omitted]
There are three key loops you may be interested in monitoring within your activity:

  • The entire lifetime of an activity happens between the first call to onCreate(Bundle) through to a single final call to onDestroy(). An activity will do all setup of "global" state in onCreate(), and release all remaining resources in onDestroy(). For example, if it has a thread running in the background to download data from the network, it may create that thread in onCreate() and then stop the thread in onDestroy().
  • The visible lifetime of an activity happens between a call to onStart() until a corresponding call to onStop(). During this time the user can see the activity on-screen, though it may not be in the foreground and interacting with the user. Between these two methods you can maintain resources that are needed to show the activity to the user. For example, you can register a BroadcastReceiver in onStart() to monitor for changes that impact your UI, and unregister it in onStop() when the user can no longer see what you are displaying (see the sketch after this list). The onStart() and onStop() methods can be called multiple times, as the activity becomes visible and hidden to the user.
  • The foreground lifetime of an activity happens between a call to onResume() until a corresponding call to onPause(). During this time the activity is in front of all other activities and interacting with the user. An activity can frequently go between the resumed and paused states -- for example when the device goes to sleep, when an activity result is delivered, when a new intent is delivered -- so the code in these methods should be fairly lightweight.
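
For example, the visible-lifetime pairing is commonly used to hold a resource only while the activity can be seen. The sketch below is a minimal illustration (the receiver body and the connectivity filter are placeholder choices of mine, not taken from the documentation):

// Minimal sketch: hold a resource only while the activity is visible.
public class VisibleLifetimeActivity extends android.app.Activity {
    private final android.content.BroadcastReceiver receiver =
        new android.content.BroadcastReceiver() {
            @Override
            public void onReceive(android.content.Context context,
                                  android.content.Intent intent) {
                // react to the change that impacts the UI
            }
        };

    @Override
    protected void onStart() {
        super.onStart(); // always call up to the superclass
        registerReceiver(receiver, new android.content.IntentFilter(
                android.net.ConnectivityManager.CONNECTIVITY_ACTION));
    }

    @Override
    protected void onStop() {
        unregisterReceiver(receiver); // the user can no longer see the UI
        super.onStop();
    }
}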

The entire lifecycle of an activity is defined by the following Activity methods. All of these are hooks that you can override to do appropriate work when the activity changes state. All activities will implement onCreate(Bundle) to do their initial setup; many will also implement onPause() to commit changes to data and otherwise prepare to stop interacting with the user. You should always call up to your superclass when implementing these methods.

public class Activity extends ApplicationContext {
    protected void onCreate(Bundle savedInstanceState);
    protected void onStart();
    protected void onRestart();
    protected void onResume();
    protected void onPause();
    protected void onStop();
    protected void onDestroy();
}
 

In general the movement through an activity's lifecycle looks like this:

Method: onCreate()
  Called when the activity is first created. This is where you should do all of your normal static set up: create views, bind data to lists, etc. This method also provides you with a Bundle containing the activity's previously frozen state, if there was one. Always followed by onStart().
  Killable: No. Next: onStart()

Method: onRestart()
  Called after your activity has been stopped, prior to it being started again. Always followed by onStart().
  Killable: No. Next: onStart()

Method: onStart()
  Called when the activity is becoming visible to the user. Followed by onResume() if the activity comes to the foreground, or onStop() if it becomes hidden.
  Killable: No. Next: onResume() or onStop()

Method: onResume()
  Called when the activity will start interacting with the user. At this point your activity is at the top of the activity stack, with user input going to it. Always followed by onPause().
  Killable: No. Next: onPause()

Method: onPause()
  Called when the system is about to start resuming a previous activity. This is typically used to commit unsaved changes to persistent data, stop animations and other things that may be consuming CPU, etc. Implementations of this method must be very quick because the next activity will not be resumed until this method returns. Followed by either onResume() if the activity returns back to the front, or onStop() if it becomes invisible to the user.
  Killable: Pre-HONEYCOMB. Next: onResume() or onStop()

Method: onStop()
  Called when the activity is no longer visible to the user, because another activity has been resumed and is covering this one. This may happen either because a new activity is being started, an existing one is being brought in front of this one, or this one is being destroyed. Followed by either onRestart() if this activity is coming back to interact with the user, or onDestroy() if this activity is going away.
  Killable: Yes. Next: onRestart() or onDestroy()

Method: onDestroy()
  The final call you receive before your activity is destroyed. This can happen either because the activity is finishing (someone called finish() on it), or because the system is temporarily destroying this instance of the activity to save space. You can distinguish between these two scenarios with the isFinishing() method.
  Killable: Yes. Next: nothing

Note the "Killable" column in the above table -- for those methods that are marked as being killable, after that method returns the process hosting the activity may killed by the system at any time without another line of its code being executed. Because of this, you should use theonPause() method to write any persistent data (such as user edits) to storage. In addition, the method onSaveInstanceState(Bundle) is called before placing the activity in such a background state, allowing you to save away any dynamic instance state in your activity into the given Bundle, to be later received in onCreate(Bundle) if the activity needs to be re-created. See the Process Lifecycle section for more information on how the lifecycle of a process is tied to the activities it is hosting. Note that it is important to save persistent data inonPause() instead of onSaveInstanceState(Bundle) because the latter is not part of the lifecycle callbacks, so will not be called in every situation as described in its documentation.

Be aware that these semantics will change slightly between applications targeting platforms starting with HONEYCOMB vs. those targeting prior platforms. Starting with Honeycomb, an application is not in the killable state until its onStop() has returned. This impacts when onSaveInstanceState(Bundle) may be called (it may be safely called after onPause()) and allows an application to safely wait until onStop() to save persistent state.

For those methods that are not marked as being killable, the activity's process will not be killed by the system from the time the method is called until after it returns. Thus an activity is in the killable state, for example, between the return of onPause() and the start of onResume().

  
When you want to download a file that starts playing in the browser as soon as you enter its address, such as a Flash file, I usually create an HTML file and download it through that. I imagine many of you do the same. Making an HTML file is not hard, but it is quite a hassle. So here is how to use JavaScript to build a download link directly in the browser's address bar and download the file from there.

As the screenshot below shows, a document is generated with JavaScript right in the address bar. What was typed is:

javascript: document.write("<a href = http://google.com>Google</a>")

(For those who don't know HTML: simply replace http://google.com with the address you want to link to, and replace Google with any string you like.)


Type that and press Enter, and an HTML document is created from the HTML passed to document.write, as shown below. Clicking the link takes you straight to Google. Turn files that refuse to download into links this way, then use "Save link as..." on the link to download the file directly. This is handy for things like Flash files.

  
This may not work in an about:blank window, on the browser's start page, or on certain sites; if it doesn't work for you, open any other site and type it into that page's address bar.

  

Asynchronous I/O

Posted by epicdev Archive : 2011. 9. 21. 16:03
Source: http://en.wikipedia.org/wiki/Asynchronous_I/O

Asynchronous I/O

Asynchronous I/O, or non-blocking I/O, is a form of input/output processing that permits other processing to continue before the transmission has finished.

Input and output (I/O) operations on a computer can be extremely slow compared to the processing of data. An I/O device can incorporate mechanical devices that must physically move, such as a hard drive seeking a track to read or write; this is often orders of magnitude slower than the switching of electric current. For example, during a disk operation that takes ten milliseconds to perform, a processor that is clocked at one gigahertz could have performed ten million instruction-processing cycles.

A simple approach to I/O would be to start the access and then wait for it to complete. But such an approach (called synchronous I/O or blocking I/O) would block the progress of a program while the communication is in progress, leaving system resources idle. When a program makes many I/O operations, this means that the processor can spend almost all of its time idle waiting for I/O operations to complete.

Alternatively, it is possible, though more complicated, to start the communication and then perform processing that does not require that the I/O has completed. This approach is called asynchronous input/output. Any task that actually depends on the I/O having completed (this includes both using the input values and critical operations that claim to assure that a write operation has been completed) still needs to wait for the I/O operation to complete, and thus is still blocked, but other processing that does not have a dependency on the I/O operation can continue.

Many operating system functions exist to implement asynchronous I/O at many levels. In fact, one of the main functions of all but the most rudimentary of operating systems is to perform at least some form of basic asynchronous I/O, though this may not be particularly apparent to the operator or programmer. In the simplest software solution, the hardware device status is polled at intervals to detect whether the device is ready for its next operation. (For example the CP/M operating system was built this way. Its system call semantics did not require any more elaborate I/O structure than this, though most implementations were more complex, and thereby more efficient.) Direct memory access (DMA) can greatly increase the efficiency of a polling-based system, and hardware interrupts can eliminate the need for polling entirely. Multitasking operating systems can exploit the functionality provided by hardware interrupts, whilst hiding the complexity of interrupt handling from the user. Spooling was one of the first forms of multitasking designed to exploit asynchronous I/O. Finally, multithreading and explicit asynchronous I/O APIs within user processes can exploit asynchronous I/O further, at the cost of extra software complexity.

Asynchronous I/O is used to improve throughput, latency, and/or responsiveness.


Forms

All forms of asynchronous I/O open applications up to potential resource conflicts and associated failure. Careful programming (often using mutual exclusion, semaphores, etc.) is required to prevent this.

When exposing asynchronous I/O to applications there are a few broad classes of implementation. The form of the API provided to the application does not necessarily correspond with the mechanism actually provided by the operating system; emulations are possible. Furthermore, more than one method may be used by a single application, depending on its needs and the desires of its programmer(s). Many operating systems provide more than one of these mechanisms; it is possible that some may provide all of them.

Process

Available in early Unix. In a multitasking operating system, processing can be distributed across different processes, which run independently, have their own memory, and process their own I/O flows; these flows are typically connected in pipelines. Processes are fairly expensive to create and maintain, so this solution only works well if the set of processes is small and relatively stable. It also assumes that the individual processes can operate independently, apart from processing each other's I/O; if they need to communicate in other ways, coordinating them can become difficult.

An extension of this approach is dataflow programming, which allows more complicated networks than just the chains that pipes support.

Polling

Variations:

  • Error if it cannot be done yet (reissue later)
  • Report when it can be done without blocking (then issue it)

Available in traditional Unix. Its major problem is that it can waste CPU time polling repeatedly when there is nothing else for the issuing process to do, reducing the time available for other processes. Also, because a polling application is essentially single-threaded it may be unable to fully exploit I/O parallelism that the hardware is capable of.

Select(/poll) loops

Available in BSD Unix, and almost anything else with a TCP/IP protocol stack that either utilizes or is modeled after the BSD implementation. A variation on the theme of polling, a select loop uses the select system call to sleep until a condition occurs on a file descriptor (e.g., when data is available for reading), a timeout occurs, or a signal is received (e.g., when a child process dies). By examining the return parameters of the select call, the loop finds out which file descriptor has changed and executes the appropriate code. Often, for ease of use, the select loop is implemented as an event loop, perhaps using callback functions; the situation lends itself particularly well to event-driven programming.

While this method is reliable and relatively efficient, it depends heavily on the Unix paradigm that "everything is a file"; any blocking I/O that does not involve a file descriptor will block the process. The select loop also relies on being able to involve all I/O in the central select call; libraries that conduct their own I/O are particularly problematic in this respect. An additional potential problem is that the select and the I/O operations are still sufficiently decoupled that select's result may effectively be a lie: if two processes are reading from a single file descriptor (arguably bad design) the select may indicate the availability of read data that has disappeared by the time that the read is issued, thus resulting in blocking; if two processes are writing to a single file descriptor (not that uncommon) the select may indicate immediate writability yet the write may still block, because a buffer has been filled by the other process in the interim, or due to the write being too large for the available buffer or in other ways unsuitable to the recipient.

The select loop doesn't reach the ultimate system efficiencies possible with, say, the completion queues method, because the semantics of the select call, allowing as it does for per-call tuning of the acceptable event set, consume some amount of time per invocation traversing the selection array. This creates little overhead for user applications that might have open one file descriptor for the windowing system and a few for open files, but becomes more of a problem as the number of potential event sources grows, and can hinder development of many-client server applications; other asynchronous methods may be noticeably more efficient in such cases. Some Unixes provide system-specific calls with better scaling; for example, epoll in Linux (which fills the return selection array with only those event sources on which an event has occurred), kqueue in FreeBSD, and /dev/poll in Solaris.

SVR3 Unix provided the poll system call. Arguably better-named than select, for the purposes of this discussion it is essentially the same thing. SVR4 Unixes (and thus POSIX) offer both calls.
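
In Java, this pattern is exposed through NIO's Selector. The following is a minimal single-threaded select loop (a bare accept-and-read skeleton of my own, offered as an illustration of the mechanism rather than production code):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// Minimal single-threaded select loop: one Selector multiplexes an
// accepting server channel and all connected client channels.
public class SelectLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // sleep until a registered channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {      // new connection
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) { // data available
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) == -1) client.close(); // peer closed
                    // else: process buf without ever blocking the loop
                }
            }
        }
    }
}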

Signals (interrupts)

Available in BSD and POSIX Unix. I/O is issued asynchronously, and when it is complete a signal (interrupt) is generated. As in low-level kernel programming, the facilities available for safe use within the signal handler are limited, and the main flow of the process could have been interrupted at nearly any point, resulting in inconsistent data structures as seen by the signal handler. The signal handler is usually not able to issue further asynchronous I/O by itself.

The signal approach, though relatively simple to implement within the OS, brings to the application program the unwelcome baggage associated with writing an operating system's kernel interrupt system. Its worst characteristic is that every blocking (synchronous) system call is potentially interruptible; the programmer must usually incorporate retry code at each call.

Callback functions

Available in Mac OS (pre-Mac OS X), VMS and Windows. Bears many of the characteristics of the signal method as it is fundamentally the same thing, though rarely recognized as such. The difference is that each I/O request usually can have its own completion function, whereas the signal system has a single callback.

A potential problem is that stack depth can grow unmanageably, as an extremely common thing to do when one I/O is finished is to schedule another. If this should be satisfied immediately, the first callback is not 'unwound' off the stack before the next one is invoked. Systems to prevent this (like 'mid-ground' scheduling of new work) add complexity and reduce performance.

Light-weight processes or threads

Light-weight processes (LWPs) or threads are available in more modern Unixes, originating in Plan 9. Like the process method, but without the data isolation that hampers coordination of the flows. This lack of isolation introduces its own problems, usually requiring kernel-provided synchronization mechanisms and thread-safe libraries. Each LWP or thread itself uses traditional blocking synchronous I/O. The requisite separate per-thread stack may preclude large-scale implementations using very large numbers of threads. The separation of textual (code) and time (event) flows provides fertile ground for errors.

This approach is also used in the Erlang programming language runtime system. The Erlang virtual machine uses asynchronous IO using a small pool of only a few threads or sometimes just one process, to handle IO from up to millions of Erlang processes. IO handling in each process is written mostly using blocking synchronous I/O. This way high performance of asynchronous I/O is merged with simplicity of normal IO. Many IO problems in Erlang are mapped to message passing, which can be easily processed using built-in selective receive.

Completion queues/ports

Available in Microsoft Windows, Solaris and DNIX. I/O requests are issued asynchronously, but notifications of completion are provided via a synchronizing queue mechanism in the order they are completed. Usually associated with a state-machine structuring of the main process (event-driven programming), which can bear little resemblance to a process that does not use asynchronous I/O or that uses one of the other forms, hampering code reuse. Does not require additional special synchronization mechanisms or thread-safe libraries, nor are the textual (code) and time (event) flows separated.

Event flags

Available in VMS. Bears many of the characteristics of the completion queue method, as it is essentially a completion queue of depth one. To simulate the effect of queue 'depth', an additional event flag is required for each potential unprocessed (but completed) event, or event information can be lost. Waiting for the next available event in such a clump requires synchronizing mechanisms that may not scale well to larger numbers of potentially parallel events.

Implementation

The vast majority of general-purpose computing hardware relies entirely upon two methods of implementing asynchronous I/O: polling and interrupts. Usually both methods are used together; the balance depends heavily upon the design of the hardware and its required performance characteristics. (DMA is not itself another independent method, it is merely a means by which more work can be done per poll or interrupt.)

Pure polling systems are entirely possible; small microcontrollers (such as systems using the PIC) are often built this way. CP/M systems could also be built this way (though rarely were), with or without DMA. Also, when the utmost performance is necessary for only a few tasks, at the expense of any other potential tasks, polling may be appropriate, as the overhead of taking interrupts may be unwelcome. (Servicing an interrupt requires time [and space] to save at least part of the processor state, along with the time required to resume the interrupted task.)

Most general-purpose computing systems rely heavily upon interrupts. A pure interrupt system may be possible, though usually some component of polling is also required, as it is very common for multiple potential sources of interrupts to share a common interrupt signal line, in which case polling is used within the device driver to resolve the actual source. (This resolution time also contributes to an interrupt system's performance penalty. Over the years a great deal of work has been done to try to minimize the overhead associated with servicing an interrupt. Current interrupt systems are rather lackadaisical when compared to some highly-tuned earlier ones, but the general increase in hardware performance has greatly mitigated this.)

Hybrid approaches are also possible, wherein an interrupt can trigger the beginning of some burst of asynchronous I/O, and polling is used within the burst itself. This technique is common in high-speed device drivers, such as network or disk, where the time lost in returning to the pre-interrupt task is greater than the time until the next required servicing. (Common I/O hardware in use these days relies heavily upon DMA and large data buffers to make up for a relatively poorly-performing interrupt system. These characteristically use polling inside the driver loops, and can exhibit tremendous throughput. Ideally the per-datum polls are always successful, or at most repeated a small number of times.)

At one time this sort of hybrid approach was common in disk and network drivers where there was not DMA or significant buffering available. Because the desired transfer speeds were faster even than could tolerate the minimum four-operation per-datum loop (bit-test, conditional-branch-to-self, fetch, and store), the hardware would often be built with automatic wait state generation on the I/O device, pushing the data ready poll out of software and onto the processor's fetch or store hardware and reducing the programmed loop to two operations. (In effect using the processor itself as a DMA engine.) The 6502 processor offered an unusual means to provide a three-element per-datum loop, as it had a hardware pin that, when asserted, would cause the processor's Overflow bit to be set directly. (Obviously one would have to take great care in the hardware design to avoid overriding the Overflow bit outside of the device driver!)

Synthesis

Using only these two tools (polling, and interrupts), all the other forms of asynchronous I/O discussed above may be (and in fact, are) synthesized.

In an environment such as a Java Virtual Machine (JVM), asynchronous I/O can be synthesized even though the environment the JVM is running in may not offer it at all. This is due to the interpreted nature of the JVM. The JVM may poll (or take an interrupt) periodically to institute an internal flow of control change, effecting the appearance of multiple simultaneous processes, at least some of which presumably exist in order to perform asynchronous I/O. (Of course, at the microscopic level the parallelism may be rather coarse and exhibit some non-ideal characteristics, but on the surface it will appear to be as desired.)

That, in fact, is the problem with using polling in any form to synthesize a different form of asynchronous I/O. Every CPU cycle that is a poll is wasted, and lost to overhead rather than accomplishing a desired task. Every CPU cycle that is not a poll represents an increase in latency of reaction to pending I/O. Striking an acceptable balance between these two opposing forces is difficult. (This is why hardware interrupt systems were invented in the first place.)

The trick to maximize efficiency is to minimize the amount of work that has to be done upon reception of an interrupt in order to awaken the appropriate application. Secondarily (but perhaps no less important) is the method the application itself uses to determine what it needs to do.

Particularly problematic (for application efficiency) are the exposed polling methods, including the select/poll mechanisms. Though the underlying I/O events they are interested in are in all likelihood interrupt-driven, the interaction with this mechanism is polled and can consume a large amount of time in the poll. This is particularly true of the potentially large-scale polling possible through select (and poll). Interrupts map very well to signals, callback functions, completion queues, and event flags; such systems can be very efficient.


 
  

Blocking sockets vs. non-blocking sockets

Posted by epicdev Archive : 2011. 9. 21. 15:19
Source: http://kin.naver.com/qna/detail.nhn?d1id=1&dirId=1040201&docId=125630325

- Blocking sockets explained

With a blocking socket, only one connection can be handled at a time.

For example, suppose a server uses blocking sockets. By its nature, a server has multiple clients connecting simultaneously to use its services; with blocking sockets, the server accepts client connections one at a time, and the work for one connection must finish before work on the next connection can begin.

 

In other words, if 10 clients connect to one server at the same time, the server finishes the work for the 1st client, then handles the 2nd, then the 3rd, then the 4th... sequentially, and the 10th client to connect has to wait until the work for the 9 clients ahead of it is done. Worse, if there is no telling when the work for each client will finish, the waiting clients may end up waiting indefinitely.

 

- Blocking sockets + multithreading explained

Threads were introduced to overcome this. When clients connect, one thread is created per client and the server operates in a multithreaded fashion. Since threads run (almost exactly) concurrently, whether 10 or 20 clients connect, their work proceeds nearly simultaneously; the sockets are still blocking, but the approach exploits the strengths of threads, as the sketch below shows.
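
A minimal sketch of the blocking, thread-per-client model (the port and the echo behavior are arbitrary choices of mine):

import java.io.*;
import java.net.*;

// Blocking socket + one thread per client: accept() blocks until a
// client connects, then a dedicated thread services that connection
// so the next accept() is not held up by slow clients.
public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9000);
        while (true) {
            final Socket client = server.accept(); // blocks
            new Thread(new Runnable() {
                public void run() {
                    try {
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                        String line;
                        while ((line = in.readLine()) != null) {
                            out.println(line); // echo; blocks only this thread
                        }
                    } catch (IOException ignored) {
                    } finally {
                        try { client.close(); } catch (IOException ignored) {}
                    }
                }
            }).start();
        }
    }
}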

 

But this approach isn't perfect either. Each operating system limits the number of threads (the limit is configurable); if more clients must connect than the thread limit allows, you have a problem (lack of scalability).

Moreover, context switching happens continuously to keep the threads running, and the more threads there are, the more performance degrades, until eventually using threads loses its point (performance loss).

 

- Non-blocking sockets explained

To overcome these problems, a model was devised in which a single thread alternates between the tasks of multiple clients: the non-blocking model.

With non-blocking sockets, a single thread services all connected clients in turn, regardless of how many there are, working in a manner similar to event-driven processing.

 

If three clients are connected to the server, the flow looks like this: if the 1st client has sent data, receive it; if the 3rd client is ready to accept data, send it.

 

2. And if around 30 clients typically stay connected and exchange data constantly, which of the two is more stable and faster?

The blocking socket + multithread approach has the drawbacks related to thread count and context switching, but it is comparatively easy to program. With 30 clients you can expect about 30 threads to be created, and personally I don't think 30 threads or so is much of a problem. If more clients than that need to connect, you should go non-blocking for the sake of scalability and performance.

The non-blocking model has many advantages, but its drawback is that it is comparatively hard to program.

Also, even the non-blocking model has points where bottlenecks occur, so there are more things to watch out for, such as using threads for the service-related parts.

The choice is yours, but I would go with blocking sockets + multithreading.

Oh, one more note: non-blocking offers higher scalability than blocking, but it is not faster. If the server must accept a large number of clients and provide a stable service, I think choosing non-blocking is the wise move.
  